The Future of College? – by Graeme Wood

A brash tech entrepreneur thinks he can reinvent higher education by stripping it down to its essence, eliminating lectures and tenure along with football games, ivy-covered buildings, and research libraries. What if he’s right?

On a Friday morning in April, I strapped on a headset, leaned into a microphone, and experienced what had been described to me as a type of time travel to the future of higher education. I was on the ninth floor of a building in downtown San Francisco, in a neighborhood whose streets are heavily populated with winos and vagrants, and whose buildings host hip new businesses, many of them tech start-ups. In a small room, I was flanked by a publicist and a tech manager from an educational venture called the Minerva Project, whose founder and CEO, the 39-year-old entrepreneur Ben Nelson, aims to replace (or, when he is feeling less aggressive, “reform”) the modern liberal-arts college.

Minerva is an accredited university with administrative offices and a dorm in San Francisco, and it plans to open locations in at least six other major world cities. But the key to Minerva, what sets it apart most jarringly from traditional universities, is a proprietary online platform developed to apply pedagogical practices that have been studied and vetted by one of the world’s foremost psychologists, a former Harvard dean named Stephen M. Kosslyn, who joined Minerva in 2012.

Nelson and Kosslyn had invited me to sit in on a test run of the platform, and at first it reminded me of the opening credits of The Brady Bunch: a grid of images of the professor and eight “students” (the others were all Minerva employees) appeared on the screen before me, and we introduced ourselves. For a college seminar, it felt impersonal, and though we were all sitting on the same floor of Minerva’s offices, my fellow students seemed oddly distant, as if piped in from the International Space Station. I half expected a packet of astronaut ice cream to float by someone’s face.

Within a few minutes, though, the experience got more intense. The subject of the class—one in a series during which the instructor, a French physicist named Eric Bonabeau, was trying out his course material—was inductive reasoning. Bonabeau began by polling us on our understanding of the reading, a Nature article about the sudden depletion of North Atlantic cod in the early 1990s. He asked us which of four possible interpretations of the article was the most accurate. In an ordinary undergraduate seminar, this might have been an occasion for timid silence, until the class’s biggest loudmouth or most caffeinated student ventured a guess. But the Minerva class extended no refuge for the timid, nor privilege for the garrulous. Within seconds, every student had to provide an answer, and Bonabeau displayed our choices so that we could be called upon to defend them.

Bonabeau led the class like a benevolent dictator, subjecting us to pop quizzes, cold calls, and pedagogical tactics that during an in-the-flesh seminar would have taken precious minutes of class time to arrange. He split us into groups to defend opposite propositions—that the cod had disappeared because of overfishing, or that other factors were to blame. No one needed to shuffle seats; Bonabeau just pushed a button, and the students in the other group vanished from my screen, leaving my three fellow debaters and me to plan, using a shared bulletin board on which we could record our ideas. Bonabeau bounced between the two groups to offer advice as we worked. After a representative from each group gave a brief presentation, Bonabeau ended by showing a short video about the evils of overfishing. (“Propaganda,” he snorted, adding that we’d talk about logical fallacies in the next session.) The computer screen blinked off after 45 minutes of class.

The system had bugs—it crashed once, and some of the video lagged—but overall it worked well, and felt decidedly unlike a normal classroom. For one thing, it was exhausting: a continuous period of forced engagement, with no relief in the form of time when my attention could flag or I could doodle in a notebook undetected. Instead, my focus was directed relentlessly by the platform, and because it looked like my professor and fellow edu-nauts were staring at me, I was reluctant to ever let my gaze stray from the screen. Even in moments when I wanted to think about aspects of the material that weren’t currently under discussion—to me these seemed like moments of creative space, but perhaps they were just daydreams—I felt my attention snapped back to the narrow issue at hand, because I had to answer a quiz question or articulate a position. I was forced, in effect, to learn. If this was the education of the future, it seemed vaguely fascistic. Good, but fascistic.

Minerva, which operates for profit, started teaching its inaugural class of 33 students this month. To seed this first class with talent, Minerva gave every admitted student a full-tuition scholarship of $10,000 a year for four years, plus free housing in San Francisco for the first year. Next year’s class is expected to have 200 to 300 students, and Minerva hopes future classes will double in size roughly every year for a few years after that.

Those future students will pay about $28,000 a year, including room and board, a $30,000 savings over the sticker price of many of the schools—the Ivies, plus other hyperselective colleges like Pomona and Williams—with which Minerva hopes to compete. (Most American students at these colleges do not pay full price, of course; Minerva will offer financial aid and target middle-class students whose bills at the other schools would still be tens of thousands of dollars more per year.) If Minerva grows to 2,500 students a class, that would mean an annual revenue of up to $280 million. A partnership with the Keck Graduate Institute in Claremont, California, allowed Minerva to fast-track its accreditation, and its advisory board has included Larry Summers, the former U.S. Treasury secretary and Harvard president, and Bob Kerrey, the former Democratic senator from Nebraska, who also served as the president of the New School, in New York City.

Nelson’s long-term goal for Minerva is to radically remake one of the most sclerotic sectors of the U.S. economy, one so shielded from the need for improvement that its biggest innovation in the past 30 years has been to double its costs and hire more administrators at higher salaries.

The paradox of undergraduate education in the United States is that it is the envy of the world, but also tremendously beleaguered. In that way it resembles the U.S. health-care sector. Both carry price tags that shock the conscience of citizens of other developed countries. They’re both tied up inextricably with government, through student loans and federal research funding or through Medicare. But if you can afford the Mayo Clinic, the United States is the best place in the world to get sick. And if you get a scholarship to Stanford, you should take it, and turn down offers from even the best universities in Europe, Australia, or Japan. (Most likely, though, you won’t get that scholarship. The average U.S. college graduate in 2014 carried $33,000 of debt.)

Financial dysfunction is only the most obvious way in which higher education is troubled. In the past half millennium, the technology of learning has hardly budged. The easiest way to picture what a university looked like 500 years ago is to go to any large university today, walk into a lecture hall, and imagine the professor speaking Latin and wearing a monk’s cowl. The most common class format is still a professor standing in front of a group of students and talking. And even though we’ve subjected students to lectures for hundreds of years, we have no evidence that they are a good way to teach. (One educational psychologist, Ludy Benjamin, likens lectures to Velveeta cheese—something lots of people consume but no one considers either delicious or nourishing.)

In recent years, other innovations in higher education have preceded Minerva, most famously massive open online courses, known by the unfortunate acronym MOOCs. Among the most prominent MOOC purveyors are Khan Academy, the brainchild of the entrepreneur Salman Khan, and Coursera, headed by the Stanford computer scientists Andrew Ng and Daphne Koller. Khan Academy began as a way to tutor children in math, but it has grown to include a dazzling array of tutorials, some very effective, many on technical subjects. Coursera offers college-level classes for free (you can pay for premium services, like actual college credit). There can be hundreds of thousands of students in a single course, and millions are enrolled altogether. At their most basic, these courses consist of standard university lectures, caught on video.

But Minerva is not a MOOC provider. Its courses are not massive (they’re capped at 19 students), open (Minerva is overtly elitist and selective), or online, at least not in the same way Coursera’s are. Lectures are banned. All Minerva classes take the form of seminars conducted on the platform I tested. The first students will by now have moved into Minerva’s dorm on the fifth floor of a building in San Francisco’s Nob Hill neighborhood and begun attending class on Apple laptops they were required to supply themselves.

Each year, according to Minerva’s plan, they’ll attend university in a different place, so that after four years they’ll have the kind of international experience that other universities advertise but can rarely deliver. By 2016, Berlin and Buenos Aires campuses will have opened. Likely future cities include Mumbai, Hong Kong, New York, and London. Students will live in dorms with two-person rooms and a communal kitchen. They’ll also take part in field trips organized by Minerva, such as a tour of Alcatraz with a prison psychologist. Minerva will maintain almost no facilities other than the dorm itself—no library, no dining hall, no gym—and students will use city parks and recreation centers, as well as other local cultural resources, for their extracurricular activities.

The professors can live anywhere, as long as they have an Internet connection. Given that many academics are coastal-elite types who refuse to live in places like Evansville, Indiana, geographic freedom is a vital part of Minerva’s faculty recruitment.

The student body could become truly global, in part because Minerva’s policy is to admit students without regard to national origin, thus catering to the unmet demand of, say, prosperous Chinese and Indians and Brazilians for American-style liberal-arts education.

The Minerva boast is that it will strip the university experience down to the aspects that are shown to contribute directly to student learning. Lectures, gone. Tenure, gone. Gothic architecture, football, ivy crawling up the walls—gone, gone, gone. What’s left will be leaner and cheaper. (Minerva has already attracted $25 million in capital from investors who think it can undercut the incumbents.) And Minerva officials claim that their methods will be tested against scientifically determined best practices, unlike the methods used at other universities and assumed to be sound just because the schools themselves are old and expensive. Yet because classes have only just begun, we have little clue as to whether the process of stripping down the university removes something essential to what has made America’s best colleges the greatest in the world.

Minerva will, after all, look very little like a university—and not merely because it won’t be accessorized in useless and expensive ways. The teaching methods may well be optimized, but universities, as currently constituted, are only partly about classroom time. Can a school that has no faculty offices, research labs, community spaces for students, or professors paid to do scholarly work still be called a university?

If Minerva fails, it will lay off its staff and sell its office furniture and never be heard from again. If it succeeds, it could inspire a legion of entrepreneurs, and a whole category of legacy institutions might have to liquidate. One imagines tumbleweeds rolling through abandoned quads and wrecking balls smashing through the windows of classrooms left empty by students who have plugged into new online platforms.

The decor in the lobby of the Minerva office building nods to the classical roots of education: enormous Roman statues dominate. (Minerva is the Roman goddess of wisdom.) But where Minerva’s employees work, on the ninth floor, the atmosphere is pure business, in a California-casual sort of way. Everyone, including the top officers of the university, works at open-plan stations. I associate scholars’ offices with chalk dust, strewn papers, and books stacked haphazardly in contravention of fire codes. But here, I found tidiness.

One of the Minerva employees least scholarly in demeanor is its founder, chief executive, and principal evangelist. Ben Nelson attended the University of Pennsylvania’s Wharton School as an undergraduate in the late 1990s and then had no further contact with academia before he began incubating Minerva, in 2010. His résumé’s main entry is his 10-year stint as an executive at Snapfish, an online photo service that allows users to print pictures on postcards and in books.

Nelson is curly-haired and bespectacled, and when I met him he wore a casual button-down shirt with no tie or jacket. His ambition to reform academia was born of his own undergraduate experience. At Wharton, he was dissatisfied with what he perceived as a random barrage of business instruction, with no coordination to ensure that he learned bedrock skills like critical thinking. “My entire critique of higher education started with curricular reform at Penn,” he says. “General education is nonexistent. It’s effectively a buffet, and when you have a noncurated academic experience, you effectively don’t get educated. You get a random collection of information. Liberal-arts education is about developing the intellectual capacity of the individual, and learning to be a productive member of society. And you cannot do that without a curriculum.”

Students begin their Minerva education by taking the same four “Cornerstone Courses,” which introduce core concepts and ways of thinking that cut across the sciences and humanities. These are not 101 classes, meant to impart freshman-level knowledge of subjects. (“The freshman year [as taught at traditional schools] should not exist,” Nelson says, suggesting that MOOCs can teach the basics. “Do your freshman year at home.”) Instead, Minerva’s first-year classes are designed to inculcate what Nelson calls “habits of mind” and “foundational concepts,” which are the basis for all sound systematic thought. In a science class, for example, students should develop a deep understanding of the need for controlled experiments. In a humanities class, they need to learn the classical techniques of rhetoric and develop basic persuasive skills. The curriculum then builds from that foundation.

Nelson compares this level of direction favorably with what he found at Penn (curricular disorder), and with what one finds at Brown (very few requirements) or Columbia (a “great books” core curriculum). As Minerva students advance, they choose one of five majors: arts and humanities, social sciences, computational sciences, natural sciences, or business.

Snapfish sold for $300 million to Hewlett-Packard in 2005, and Nelson made enough to fund two years of planning for his dream project. He is prone to bombastic pronouncements about Minerva, making broad claims about the state of higher education that are at times insightful and at times speculative at best. He speaks at many conferences, unsettling academic administrators less radical than he is by blithely dismissing long-standing practices. “Your cash cow is the lecture, and the lecture is over,” he told a gathering of deans. “The lecture model … will be obliterated.”

In academic circles, where overt competition between institutions is a serious breach of etiquette, Nelson is a bracing presence. (Imagine the president of Columbia telling the assembled presidents of other Ivy League schools, as Nelson sometimes tells his competitors, “Our goal is not to put you out of business; it is to lead you. It is to show you that there is a better way to do what you are doing, and for you to follow us.”)

The other taboo Nelson ignores is acknowledgment of profit motive. “For-profit in higher education equates to evil,” Nelson told me, noting that most for-profit colleges are indeed the sort of disreputable degree mills that wallpaper the Web with banner ads. “As if nonprofits aren’t money-driven!” he howled. “They’re just corporations that dodge their taxes.” (See “The Law-School Scam.”)

Minerva is built to make money, but Nelson insists that its motives will align with student interests. As evidence, Nelson points to the fact that the school will eschew all federal funding, to which he attributes much of the runaway cost of universities. The compliance cost of taking federal financial aid is about $1,000 per student—a tenth of Minerva’s tuition—and the aid wouldn’t be of any use to the majority of Minerva’s students, who will likely come from overseas.

Subsidies, Nelson says, encourage universities to enroll even students who aren’t likely to thrive, and to raise tuition, since federal money is pegged to costs. These effects pervade higher education, he says, but they have nothing to do with teaching students. He believes Minerva would end up hungering after federal money, too, if it ever allowed itself to be tempted. Instead, like Ulysses, it will tie itself to the mast and work with private-sector funding only. “If you put a drug”—federal funds—“into a system, the system changes itself to fit the drug. If [Minerva] took money from the government, in 20 years we’d be majority American, with substantially higher tuition. And as much as you try to create barriers, if you don’t structure it to be mission-oriented, that’s the way it will evolve.”

When talking about Minerva’s future, Nelson says he thinks in terms of the life spans of universities—hundreds of years as opposed to the decades of typical corporate time horizons. Minerva’s very founding is a rare event. “We are now building an institution that has not been attempted in over 100 years, since the founding of Rice”—the last four-year liberal-arts-based research institution founded in this country. It opened in 1912 and now charges $53,966 a year.

So far, Minerva has hired its deans, who will teach all the courses for this inaugural class. It will hire rank-and-file faculty later in the year. One of Minerva’s main strategies is to lure a few prominent scholars from existing institutions. Other “new” universities, especially fantastically wealthy ones like King Abdullah University of Science and Technology, in Saudi Arabia, have attempted a similar strategy—at times with an almost cargo-cult-like confidence that filling their labs and offices with big-shot professors will turn the institutions themselves into important players.

Among the bigger shots hired by Minerva is Eric Bonabeau, the dean of computational sciences, who taught the seminar I participated in. Bonabeau, a physicist who has worked in academia and in business, studies the mathematics of swarming behavior (of bees, fish, robots), and his research helped inspire Michael Crichton’s terrible thriller Prey. Diane Halpern, a prominent psychologist, signed on this year as the dean of social sciences.

Minerva’s first major hire, Stephen M. Kosslyn, is a man I met in the fall of 1999, when I went to have my head examined. Kosslyn taught cognitive psychology and neuroscience for 32 years at Harvard, and during my undergraduate years I visited his lab and earned a few dollars here and there as one of his guinea pigs. The studies usually involved sticking my head in an fMRI machine so he and his researchers could record activity in my brain and observe which parts fired when.

Around that time, Kosslyn’s lab made news because it began to show how “mental imagery”—the experience of seeing things in your mind’s eye—really works. (One study involved putting volunteers into fMRI machines and asking them to hold an image of a cat in their head for as long as possible. You can try this exercise now. If you’re especially good at concentrating, the cat might vanish in a matter of a few seconds, as soon as your brain—distractible as a puppy—comes up with another object of attention.) Kosslyn served as Harvard’s dean of social sciences from 2008 to 2010, then spent two years at Stanford as the director of its Center for Advanced Study in the Behavioral Sciences. In 2013, after a few months of contract work for Minerva, he resigned from Stanford and joined Minerva as its founding dean.

Kosslyn speaks softly and slowly, with little emotional affect. Bald and bearded, he has an owlish stare, and at times during my recent conversations with him, he seemed to be scanning my brain with his eyes. For purposes of illustration (and perhaps also amusement), he will ask you to perform some cognitive task, then wait patiently while you do it—explain a concept, say, or come up with an argument—before telling you matter-of-factly what your mind just did. When talking with him, you often feel as though your brain is a machine, and his job is to know how it works better than it knows itself.

He spent much of his first year at Minerva surveying the literature on education and the psychology of learning. “We have numerous sound, reproducible experiments that tell us how people learn, and what teachers can do to improve learning.” Some of the studies are ancient, by the standards of scientific research—and yet their lessons are almost wholly ignored.

For example, he points to a 1972 study by Fergus I. M. Craik and Robert S. Lockhart in The Journal of Verbal Learning and Verbal Behavior, which shows that memory of material is enhanced by “deep” cognitive tasks. In an educational context, such tasks would include working with material, applying it, arguing about it (rote memorization is insufficient). The finding is hardly revolutionary, but applying it systematically in the classroom is. Similarly, research shows that having a pop quiz at the beginning of a class and (if the students are warned in advance) another one at a random moment later in the class greatly increases the durability of what is learned. Likewise, if you ask a student to explain a concept she has been studying, the very act of articulating it seems to lodge it in her memory. Forcing students to guess the answer to a problem, and to discuss their answers in small groups, seems to make them understand the problem better—even if they guess wrong.

Kosslyn has begun publishing his research on the science of learning. His most recent co-authored article, in Psychological Science in the Public Interest, argues (against conventional wisdom) that the traditional concept of “cognitive styles”—visual versus aural learners, those who learn by doing versus those who learn by studying—is muddled and wrong.

The pedagogical best practices Kosslyn has identified have been programmed into the Minerva platform so that they are easy for professors to apply. They are not only easy, in fact, but also compulsory, and professors will be trained intensively in how to use the platform.

This approach does have its efficiencies. In a normal class, a pop quiz might involve taking out paper and pencils, not to mention eye-rolls from students. On the Minerva platform, quizzes—often a single multiple-choice question—are over and done in a matter of seconds, with students’ answers immediately logged and analyzed. Professors are able to sort students instantly, and by many metrics, for small-group work—perhaps pairing poets with business majors, to expose students who are weak in a particular class to the thought processes of their stronger peers. Some claim that education is an art and a science. Nelson has disputed this: “It’s a science and a science.”
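As an illustration of the kind of instant sorting described above, here is a minimal Python sketch. Minerva’s platform is proprietary, so the quiz-score field and the pairing rule below are hypothetical stand-ins, not its actual logic; the point is only to show how quickly logged answers can be turned into mixed small groups.

```python
# Illustrative sketch only: the data and the grouping rule are hypothetical,
# not Minerva's actual implementation.
from itertools import zip_longest

# Hypothetical record of logged quiz answers: student -> running accuracy.
quiz_scores = {
    "Ana": 0.9, "Ben": 0.4, "Chi": 0.7, "Dee": 0.55,
    "Eli": 0.8, "Fay": 0.35, "Gus": 0.6, "Hal": 0.95,
}

def mixed_pairs(scores):
    """Pair the strongest performer with the weakest, the second-strongest
    with the second-weakest, and so on -- one simple way to sort students
    instantly into small groups that mix stronger and weaker peers."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    half = len(ranked) // 2
    return list(zip_longest(ranked[:half], reversed(ranked[half:])))

print(mixed_pairs(quiz_scores))
# -> [('Hal', 'Fay'), ('Ana', 'Ben'), ('Eli', 'Dee'), ('Chi', 'Gus')]
```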

Nelson likes to compare this approach to traditional seminars. He says he spoke to a prominent university president—he wouldn’t say which one—early in the planning of Minerva, and he found the man’s view of education, in a word, faith-based. “He said the reason elite university education was so great was because you take an expert in the subject, plus a bunch of smart kids, you put them in a room and apply pressure—and magic happens,” Nelson told me, leaning portentously on that word. “That was his analysis. They’re trying to sell magic! Something that happens by accident! It sure didn’t happen when I was an undergrad.”

To Kosslyn, building effective teaching techniques directly into the platform gives Minerva a huge advantage. “Typically, the way a professor learns to teach is completely haphazard,” he says. “One day the person is a graduate student, and the next day, a professor standing up giving a lecture, with almost no training.” Lectures, Kosslyn says, are pedagogically unsound, although for universities looking to trim budgets they are at least cost-effective, with one employee for dozens or hundreds of tuition-paying students. “A great way to teach,” Kosslyn says drily, “but a terrible way to learn.”

I asked him whether, at Harvard and Stanford, he attempted to apply any of the lessons of psychology in the classroom. He told me he could have alerted colleagues to best practices, but they most likely would have ignored them. “The classroom time is theirs, and it is sacrosanct,” he says. The very thought that he might be able to impose his own order on it was laughable. Professors, especially tenured ones at places like Harvard, answer to nobody.

It occurred to me that Kosslyn was living the dream of every university administrator who has watched professors mulishly defy even the most reasonable directives. Kosslyn had powers literally no one at Harvard—even the president—had. He could tell people what to do, and they had to do it.

There were moments, during my various conversations with Kosslyn and Nelson, when I found I couldn’t wait for Minerva’s wrecking ball to demolish the ivory tower. The American college system is a frustrating thing—and I say this as someone who was a satisfied customer of two undergraduate institutions, Deep Springs College (an obscure but selective college in the high desert of California) and Harvard. At Deep Springs, my classes rarely exceeded five students. At Harvard, I went to many excellent lectures and took only one class with fewer than 10 students. I didn’t sleepwalk or drink my way through either school, and the education I received was well worth the $16,000 a year my parents paid, after scholarships.

But the Minerva seminar did bring back memories of many a pointless, formless discussion or lecture, and it began to seem obvious that if Harvard had approached teaching with a little more care, it could have improved the seminars and replaced the worst lectures with something else.

When Eric Bonabeau assigned the reading for his class on induction, he barely bothered to tell us what induction was, or how it related to North Atlantic cod. When I asked him afterward about his decision not to spend a session introducing the concept, he said the Web had plenty of tutorials about induction, and any Minerva student ought to be able to learn the basics on her own time, in her own way. Seminars are for advanced discussion. And, of course, he was right.

Minerva’s model, Nelson says, will flourish in part because it will exploit free online content, rather than trying to compete with it, as traditional universities do. A student who wants an introductory economics course can turn to Coursera or Khan Academy. “We are a university, and a MOOC is a version of publishing,” Nelson explains. “The reason we can get away with the pedagogical model we have is because MOOCs exist. The MOOCs will eventually make lectures obsolete.”

Indeed, the more I looked into Minerva and its operations, the more I started to think that certain functions of universities have simply become less relevant as information has become more ubiquitous. Just as learning to read in Latin was essential before books became widely available in other languages, gathering students in places where they could attend lectures in person was once a necessary part of higher education. But by now books are abundant, and so are serviceable online lectures by knowledgeable experts.

On the other hand, no one yet knows whether reducing a university to a smooth-running pedagogical machine will continue to allow scholarship to thrive—or whether it will simply put universities out of business, replace scholar-teachers with just teachers, and retard a whole generation of research. At any great university, there are faculty who are terrible at teaching but whose work drives their field forward with greater momentum than the research of their classroom-competent colleagues. Will there be a place for such people at Minerva—or anywhere, if Minerva succeeds?

Last spring, when universities began mailing out acceptance letters and parents all over the country shuddered as the reality of tuition bills became more concrete, Minerva sent 69 offers. Thirty-three students decided to enroll, a typical percentage for a liberal-arts school. Nelson told me Minerva would admit students without regard for diversity or balance of gender.

Applicants to Minerva take a battery of online quizzes, including spatial-reasoning tests of the sort one might find on an IQ test. SATs are not considered, because affluent students can boost their scores by hiring tutors. (“They’re a good way of determining how rich a student is,” Nelson says.) If students perform well enough, Minerva interviews them over Skype and makes them write a short essay during the interview, to ensure that they aren’t paying a ghost writer. “The top 30 applicants get in,” he told me back in February, slicing his hand through the air to mark the cutoff point. For more than three years, he had been proselytizing worldwide, speaking to high-school students in California and Qatar and Brazil. In May, he and the Minerva deans made the final chop.

Of the students who enrolled, slightly less than 20 percent are American—a percentage much higher than anticipated. (Nelson ultimately expects as many as 90 percent of the students to come from overseas.) Perhaps not surprisingly, the students come disproportionately from unconventional backgrounds—nearly one-tenth are from United World Colleges, the chain of cosmopolitan hippie high schools that brings together students from around the globe in places like Wales, Singapore, and New Mexico.

In an oddly controlling move for a university, Minerva asked admitted students to run requests for media interviews by its public-relations department. But the university gave me the names of three students willing to speak.

When I got through to Ian Van Buskirk of Marietta, Georgia, he was eager to tell me about a dugout canoe that he had recently carved out of a two-ton oak log, using only an ax, an adze, and a chisel, and that he planned to take on a maiden voyage in the hour after our conversation. He told me he would have attended Duke University if Minerva hadn’t come calling, but he said it wasn’t a particularly difficult decision, even though Minerva lacks the prestige and 176-year history of Duke. “There’s no reputation out there,” he told me. “But that means we get to make the reputation ourselves. I’m creating it now, while I’m talking to you.”

Minerva had let him try out the same online platform I did, and Van Buskirk singled out the “level of interaction and intensity” as a reason for attending. “It took deep concentration,” he said. “It’s not some lecture class where you can just click ‘record’ on your tape.” He said the focus required was similar to the mind-set he’d needed when he made his first hacks into his oak log, which could have cracked, rendering it useless.

Another student, Shane Dabor, of the small city of Brantford, Ontario, had planned to attend Canada’s University of Waterloo or the University of Toronto. But his experiences with online learning and a series of internships had led him to conclude that traditional universities were not for him. “I already had lots of friends at university who weren’t learning anything,” he says. “Both options seemed like a wager, and I chose this one.”

A young Palestinian woman, Rana Abu Diab, of Silwan, in East Jerusalem, described how she had learned English through movies and books (a translation of the Norwegian philosophical novel Sophie’s World was a particular favorite). “If I had relied on my school, I would not be able to have a two-minute conversation,” she told me in fluent English. During a year studying media at Birzeit University, in Ramallah, she heard about Minerva and decided to scrap her other academic plans and focus on applying there. For her, the ability to study overseas on multiple continents, and get an American-style liberal-arts education in the process, was irresistible. “I want to explore everything and learn everything,” she says. “And that’s what Minerva is offering: an experience that lets you live multiple lives and learn not just your concentration but how to think.” Minerva admitted her, and, like a third of her classmates in the founding class, she received a supplemental scholarship, which she could use to pay for her computer and health insurance.

Two students told me that they had felt a little trepidation, and a need to convince themselves or their parents that Minerva wasn’t just a moneymaking scheme. Minerva had an open house weekend for admitted students, and (perhaps ironically) the in-person interactions with Minerva faculty and staff helped assure them that the university was legit. The students all now say they’re confident in Minerva—although of course they can leave whenever they like, with little lost but time.

Some people consider universities sacred places, and they might even see professors’ freedom to be the fallible sovereigns of their own classrooms as a necessary part of what makes a university special. To these romantics, universities are havens from a world dominated by orthodoxy, money, and quotidian concerns. Professors get to think independently, and students come away molded by the total experience—classes, social life, extracurriculars—that the university provides. We spend the rest of our lives chasing mates, money, and jobs, but at university we enjoy the liberty to indulge aimless curiosity in subjects we know nothing about, for purposes unrelated to efficiency or practicality.

Minerva is too young to have attracted zealous naysayers, but it’s safe to assume that the people with this disposition toward the university experience are least likely to be enthusiastic about Minerva and other attempts to revolutionize education through technical innovation. MOOCs are beloved by those too poor for a traditional university, as well as those who like to dabble, and those who like to learn in their pajamas. And MOOCs are not to be knocked: for a precocious Malawian peasant girl who learns math through free lessons from Khan Academy, the new Web resources can change her life. But the dropout rate for online classes is about 95 percent, and they skew strongly toward quantitative disciplines, particularly computer science, and toward privileged male students. As Nelson is fond of pointing out, however, MOOCs will continue to get better, until eventually no one will pay Duke or Johns Hopkins for the possibility of a good lecture, when Coursera offers a reliably great one, with hundreds of thousands of five-star ratings, for free.

The question remains as to whether Minerva can provide what traditional universities offer now. Kosslyn’s project of efficiently cramming learning into students’ brains is preferable to failing to cram in anything at all. And it is designed to convey not just information, as most MOOCs seem to, but whole mental tool kits that help students become more thoughtful citizens. But defenders of the traditional university see efficiency as a false idol.

“Like other things that are going on now in higher ed, Minerva brings us back to first principles,” says Harry R. Lewis, a computer-science professor who was the dean of Harvard’s undergraduate college from 1995 to 2003. What, he asks, does it mean to be educated? Perhaps the process of education is a profound one, involving all sorts of leaps in maturity that do not show up on a Kosslyn-style test of pedagogical efficiency. “I’m sure there’s a market for people who want to be more efficiently educated,” Lewis says. “But how do you improve the efficiency of growing up?”

He warns that online-education innovations tend to be oversold. “They seem to want to re-create the School of Athens in every little hamlet on the prairie—and maybe they’ll do that,” he told me. “But part of the process of education happens not just through good pedagogy but by having students in places where they see the scholars working and plying their trades.”

He calls the “hydraulic metaphor” of education—the idea that the main task of education is to increase the flow of knowledge into the student—an “old fallacy.” As Lewis explains, “Plutarch said the mind is not a vessel to be filled but a fire to be lit. Part of my worry about these Internet start-ups is that it’s not clear they’ll be any good at the fire-lighting part.”

In February, at a university-administrator conference at a Hyatt in downtown San Francisco, Ben Nelson spoke to a plenary session of business-school deans from around the world. Daphne Koller of Coursera sat opposite him onstage, and they calmly but assuredly described what sounded to me like the destruction of the very schools where their audience members worked. Nelson wore a bored smirk while an introductory video played, advertising the next year’s version of the same conference. To a pair of educational entrepreneurs boasting the low price of their new projects, the slickly produced video must have looked like just another expensive barnacle on the hull of higher education.

“Content is about to become free and ubiquitous,” Koller said, an especially worrying comment for deans who still thought the job of their universities was to teach “content.” The institutions “that are going to survive are the ones that reimagine themselves in this new world.”

Nelson ticked off the advantages he had over legacy institutions: the spryness of a well-funded start-up, a student body from all over the world, and deals for faculty (they get to keep their own intellectual property, rather than having to hand over lucrative patents to, say, Stanford) that are likely to make Minerva attractive.

Yet in some ways, the worst possible outcome would be for U.S. higher education to accept Minerva as its model and dismantle the old universities before anyone can really be sure that it offers a satisfactory replacement. During my conversations with the three Minerva students, I wanted to ask whether they were confident Minerva would give them all the wonderful intangibles and productive diversions that Harry Lewis found so important. But then I remembered what I was like as a teenager headed off to college, so ignorant of what college was and what it could be, and so reliant on the college itself to provide what I’d need in order to get a good education. These three young students were more resourceful than I was, and probably more deliberate in their choice of college. But they were newcomers to higher education, and asking them whether their fledgling alma mater could provide these things seemed akin to asking the passengers on the Mayflower how they liked America as soon as their feet touched Plymouth Rock.

Lewis is certainly right when he says that Minerva challenges the field to return to first principles. But of course the conclusions one reaches might not be flattering to traditional colleges. One possibility is that Minerva will fail because a college degree, for all the high-minded talk of liberal education— of lighting fires and raising thoughtful citizens—is really just a credential, or an entry point to an old-boys network that gets you your first job and your first lunch with the machers at your alumni club. Minerva has no alumni club, and if it fails for this reason, it will look naive and idealistic, a bet on the inherent value of education in a world where cynicism gets better odds.

In another sense, it’s difficult to imagine Minerva failing altogether: it will offer something that resembles a liberal education to large segments of the Earth’s population who currently have to choose between the long-shot possibility of getting into a traditional U.S. school, and the more narrowly career-oriented education available in their home country. That population might give Minerva a steady flow of tuition-paying warm bodies even if U.S. higher education ignores it completely. It could plausibly become the Amherst of the world beyond the borders of the United States.

These are not, however, the terms by which Ben Nelson defines success. To him, the brass ring is for Minerva to force itself on the consciousness of the Yales and Swarthmores and “lead” American universities into a new era. More modestly, we can expect Minerva to force some universities to justify what previously could be waved off with mentions of “magic” and a puff of smoke. Its seminar platform will challenge professors to stop thinking they’re using technology just because they lecture with PowerPoint.

It seems only remotely possible that in 20 years Minerva could have more students enrolled than Ohio State will. But it is almost a certainty that the classrooms of elite universities will in that time have come to look more and more like Minerva classrooms, with professors and students increasingly separated geographically, mediated through technology that alters the nature of the student-teacher relationship. Even if Minerva turns out not to be the venture that upends American higher education, other innovators will crop up in its wake to address the exact weaknesses Nelson now attacks. The idea that college will in two decades look exactly as it does today increasingly sounds like the forlorn, fingers-crossed hope of a higher-education dinosaur that retirement comes before extinction.

At the university-administrator conference where Nelson spoke in February, I sat at a table with an affable bunch of deans from Australia and the United States. They listened attentively, first with interest and then with growing alarm. Toward the end of the conversation, the sponsoring organization’s president asked the panelists what they expected to be said at a similar event in 2017, on the same topic of innovative online education. (“Assuming we’re still in business,” a dean near me whispered to no one in particular.)

Daphne Koller said she expected Coursera to have grown in offerings into a university the size of a large state school—after having started from scratch in 2012. Even before Nelson gave his answer, I noticed some audience members uncomfortably shifting their weight. The stench of fear made him bold.

“I predict that in three years, four or five or seven or eight of you will be onstage here, presenting your preliminary findings of your first year of a radical new conception of your undergraduate [or] graduate program … And the rest of you will look at two or three of those versions and say, ‘Uh-oh.’ ” This was meant as a joke, but hardly anyone laughed.

Graeme Wood is a contributing editor at The Atlantic. His personal site is gcaw.net.

They’re Watching You at Work – by Don Peck

What happens when Big Data meets human resources? The emerging practice of “people analytics” is already transforming how employers hire, fire, and promote.


In 2003, thanks to Michael Lewis and his best seller Moneyball, the general manager of the Oakland A’s, Billy Beane, became a star. The previous year, Beane had turned his back on his scouts and had instead entrusted player-acquisition decisions to mathematical models developed by a young, Harvard-trained statistical wizard on his staff. What happened next has become baseball lore. The A’s, a small-market team with a paltry budget, ripped off the longest winning streak in American League history and rolled up 103 wins for the season. Only the mighty Yankees, who had spent three times as much on player salaries, won as many games. The team’s success, in turn, launched a revolution. In the years that followed, team after team began to use detailed predictive models to assess players’ potential and monetary value, and the early adopters, by and large, gained a measurable competitive edge over their more hidebound peers.

That’s the story as most of us know it. But it is incomplete. What would seem at first glance to be nothing but a memorable tale about baseball may turn out to be the opening chapter of a much larger story about jobs. Predictive statistical analysis, harnessed to big data, appears poised to alter the way millions of people are hired and assessed.

Yes, unavoidably, big data. As a piece of business jargon, and even more so as an invocation of coming disruption, the term has quickly grown tiresome. But there is no denying the vast increase in the range and depth of information that’s routinely captured about how we behave, and the new kinds of analysis that this enables. By one estimate, more than 98 percent of the world’s information is now stored digitally, and the volume of that data has quadrupled since 2007. Ordinary people at work and at home generate much of this data, by sending e-mails, browsing the Internet, using social media, working on crowd-sourced projects, and more—and in doing so they have unwittingly helped launch a grand new societal project. “We are in the midst of a great infrastructure project that in some ways rivals those of the past, from Roman aqueducts to the Enlightenment’s Encyclopédie,” write Viktor Mayer-Schönberger and Kenneth Cukier in their recent book, Big Data: A Revolution That Will Transform How We Live, Work, and Think. “The project is datafication. Like those other infrastructural advances, it will bring about fundamental changes to society.”

Some of the changes are well known, and already upon us. Algorithms that predict stock-price movements have transformed Wall Street. Algorithms that chomp through our Web histories have transformed marketing. Until quite recently, however, few people seemed to believe this data-driven approach might apply broadly to the labor market.

But it now does. According to John Hausknecht, a professor at Cornell’s school of industrial and labor relations, in recent years the economy has witnessed a “huge surge in demand for workforce-analytics roles.” Hausknecht’s own program is rapidly revising its curriculum to keep pace. You can now find dedicated analytics teams in the human-resources departments of not only huge corporations such as Google, HP, Intel, General Motors, and Procter & Gamble, to name just a few, but also companies like McKee Foods, the Tennessee-based maker of Little Debbie snack cakes. Even Billy Beane is getting into the game. Last year he appeared at a large conference for corporate HR executives in Austin, Texas, where he reportedly stole the show with a talk titled “The Moneyball Approach to Talent Management.” Ever since, that headline, with minor modifications, has been plastered all over the HR trade press.

The application of predictive analytics to people’s careers—an emerging field sometimes called “people analytics”—is enormously challenging, not to mention ethically fraught. And it can’t help but feel a little creepy. It requires the creation of a vastly larger box score of human performance than one would ever encounter in the sports pages, or that has ever been dreamed up before. To some degree, the endeavor touches on the deepest of human mysteries: how we grow, whether we flourish, what we become. Most companies are just beginning to explore the possibilities. But make no mistake: during the next five to 10 years, new models will be created, and new experiments run, on a very large scale. Will this be a good development or a bad one—for the economy, for the shapes of our careers, for our spirit and self-worth? Earlier this year, I decided to find out.

Ever since we’ve had companies, we’ve had managers trying to figure out which people are best suited to working for them. The techniques have varied considerably. Near the turn of the 20th century, one manufacturer in Philadelphia made hiring decisions by having its foremen stand in front of the factory and toss apples into the surrounding scrum of job-seekers. Those quick enough to catch the apples and strong enough to keep them were put to work.

In those same times, a different (and less bloody) Darwinian process governed the selection of executives. Whole industries were being consolidated by rising giants like U.S. Steel, DuPont, and GM. Weak competitors were simply steamrolled, but the stronger ones were bought up, and their founders typically were offered high-level jobs within the behemoth. The approach worked pretty well. As Peter Cappelli, a professor at the Wharton School, has written, “Nothing in the science of prediction and selection beats observing actual performance in an equivalent role.”

By the end of World War II, however, American corporations were facing severe talent shortages. Their senior executives were growing old, and a dearth of hiring from the Depression through the war had resulted in a shortfall of able, well-trained managers. Finding people who had the potential to rise quickly through the ranks became an overriding preoccupation of American businesses. They began to devise a formal hiring-and-management system based in part on new studies of human behavior, and in part on military techniques developed during both world wars, when huge mobilization efforts and mass casualties created the need to get the right people into the right roles as efficiently as possible. By the 1950s, it was not unusual for companies to spend days with young applicants for professional jobs, conducting a battery of tests, all with an eye toward corner-office potential. “P&G picks its executive crop right out of college,” BusinessWeek noted in 1950, in the unmistakable patter of an age besotted with technocratic possibility. IQ tests, math tests, vocabulary tests, professional-aptitude tests, vocational-interest questionnaires, Rorschach tests, a host of other personality assessments, and even medical exams (who, after all, would want to hire a man who might die before the company’s investment in him was fully realized?)—all were used regularly by large companies in their quest to make the right hire.

The process didn’t end when somebody started work, either. In his classic 1956 cultural critique, The Organization Man, the business journalist William Whyte reported that about a quarter of the country’s corporations were using similar tests to evaluate managers and junior executives, usually to assess whether they were ready for bigger roles. “Should Jones be promoted or put on the shelf?” he wrote. “Once, the man’s superiors would have had to thresh this out among themselves; now they can check with psychologists to see what the tests say.”

Remarkably, this regime, so widespread in corporate America at mid-century, had almost disappeared by 1990. “I think an HR person from the late 1970s would be stunned to see how casually companies hire now,” Peter Cappelli told me—the days of testing replaced by a handful of ad hoc interviews, with the questions dreamed up on the fly. Many factors explain the change, he said, and then he ticked off a number of them: Increased job-switching has made it less important and less economical for companies to test so thoroughly. A heightened focus on short-term financial results has led to deep cuts in corporate functions that bear fruit only in the long term. The Civil Rights Act of 1964, which exposed companies to legal liability for discriminatory hiring practices, has made HR departments wary of any broadly applied and clearly scored test that might later be shown to be systematically biased. Instead, companies came to favor the more informal qualitative hiring practices that are still largely in place today.

But companies abandoned their hard-edged practices for another important reason: many of their methods of evaluation turned out not to be very scientific. Some were based on untested psychological theories. Others were originally designed to assess mental illness, and revealed nothing more than where subjects fell on a “normal” distribution of responses—which in some cases had been determined by testing a relatively small, unrepresentative group of people, such as college freshmen. When William Whyte administered a battery of tests to a group of corporate presidents, he found that not one of them scored in the “acceptable” range for hiring. Such assessments, he concluded, measured not potential but simply conformity. Some of them were highly intrusive, too, asking questions about personal habits, for instance, or parental affection. Unsurprisingly, subjects didn’t like being so impersonally poked and prodded (sometimes literally).

For all these reasons and more, the idea that hiring was a science fell out of favor. But now it’s coming back, thanks to new technologies and methods of analysis that are cheaper, faster, and much-wider-ranging than what we had before. For better or worse, a new era of technocratic possibility has begun.

Consider Knack, a tiny start-up based in Silicon Valley. Knack makes app-based video games, among them Dungeon Scrawl, a quest game requiring the player to navigate a maze and solve puzzles, and Wasabi Waiter, which involves delivering the right sushi to the right customer at an increasingly crowded happy hour. These games aren’t just for play: they’ve been designed by a team of neuroscientists, psychologists, and data scientists to suss out human potential. Play one of them for just 20 minutes, says Guy Halfteck, Knack’s founder, and you’ll generate several megabytes of data, exponentially more than what’s collected by the SAT or a personality test. How long you hesitate before taking every action, the sequence of actions you take, how you solve problems—all of these factors and many more are logged as you play, and then are used to analyze your creativity, your persistence, your capacity to learn quickly from mistakes, your ability to prioritize, and even your social intelligence and personality. The end result, Halfteck says, is a high-resolution portrait of your psyche and intellect, and an assessment of your potential as a leader or an innovator.

When Hans Haringa heard about Knack, he was skeptical but intrigued. Haringa works for the petroleum giant Royal Dutch Shell—by revenue, the world’s largest company last year. For seven years he’s served as an executive in the company’s GameChanger unit: a 12-person team that for nearly two decades has had an outsize impact on the company’s direction and performance. The unit’s job is to identify potentially disruptive business ideas. Haringa and his team solicit ideas promiscuously from inside and outside the company, and then play the role of venture capitalists, vetting each idea, meeting with its proponents, dispensing modest seed funding to a few promising candidates, and monitoring their progress. They have a good record of picking winners, Haringa told me, but identifying ideas with promise has proved to be extremely difficult and time-consuming. The process typically takes more than two years, and less than 10 percent of the ideas proposed to the unit actually make it into general research and development.

When he heard about Knack, Haringa thought he might have found a shortcut. What if Knack could help him assess the people proposing all these ideas, so that he and his team could focus only on those whose ideas genuinely deserved close attention? Haringa reached out, and eventually ran an experiment with the company’s help.

Over the years, the GameChanger team had kept a database of all the ideas it had received, recording how far each had advanced. Haringa asked all the idea contributors he could track down (about 1,400 in total) to play Dungeon Scrawl and Wasabi Waiter, and told Knack how well three-quarters of those people had done as idea generators. (Did they get initial funding? A second round? Did their ideas make it all the way?) He did this so that Knack’s staff could develop game-play profiles of the strong innovators relative to the weak ones. Finally, he had Knack analyze the game-play of the remaining quarter of the idea generators, and asked the company to guess whose ideas had turned out to be best.

When the results came back, Haringa recalled, his heart began to beat a little faster. Without ever seeing the ideas, without meeting or interviewing the people who’d proposed them, without knowing their title or background or academic pedigree, Knack’s algorithm had identified the people whose ideas had panned out. The top 10 percent of the idea generators as predicted by Knack were in fact those who’d gone furthest in the process. Knack identified six broad factors as especially characteristic of those whose ideas would succeed at Shell: “mind wandering” (or the tendency to follow interesting, unexpected offshoots of the main task at hand, to see where they lead), social intelligence, “goal-orientation fluency,” implicit learning, task-switching ability, and conscientiousness. Haringa told me that this profile dovetails with his impression of a successful innovator. “You need to be disciplined,” he said, but “at all times you must have your mind open to see the other possibilities and opportunities.”

What Knack is doing, Haringa told me, “is almost like a paradigm shift.” It offers a way for his GameChanger unit to avoid wasting time on the 80 people out of 100—nearly all of whom look smart, well-trained, and plausible on paper—whose ideas just aren’t likely to work out. If he and his colleagues were no longer mired in evaluating “the hopeless folks,” as he put it to me, they could solicit ideas even more widely than they do today and devote much more careful attention to the 20 people out of 100 whose ideas have the most merit.

Haringa is now trying to persuade his colleagues in the GameChanger unit to use Knack’s games as an assessment tool. But he’s also thinking well beyond just his own little part of Shell. He has encouraged the company’s HR executives to think about applying the games to the recruitment and evaluation of all professional workers. Shell goes to extremes to try to make itself the world’s most innovative energy company, he told me, so shouldn’t it apply that spirit to developing its own “human dimension”?

“It is the whole man The Organization wants,” William Whyte wrote back in 1956, when describing the ambit of the employee evaluations then in fashion. Aptitude, skills, personal history, psychological stability, discretion, loyalty—companies at the time felt they had a need (and the right) to look into them all. That ambit is expanding once again, and this is undeniably unsettling. Should the ideas of scientists be dismissed because of the way they play a game? Should job candidates be ranked by what their Web habits say about them? Should the “data signature” of natural leaders play a role in promotion? These are all live questions today, and they prompt heavy concerns: that we will cede one of the most subtle and human of skills, the evaluation of the gifts and promise of other people, to machines; that the models will get it wrong; that some people will never get a shot in the new workforce.

It’s natural to worry about such things. But consider the alternative. A mountain of scholarly literature has shown that the intuitive way we now judge professional potential is rife with snap judgments and hidden biases, rooted in our upbringing or in deep neurological connections that doubtless served us well on the savanna but would seem to have less bearing on the world of work.

What really distinguishes CEOs from the rest of us, for instance? In 2010, three professors at Duke’s Fuqua School of Business asked roughly 2,000 people to look at a long series of photos. Some showed CEOs and some showed nonexecutives, and the participants didn’t know who was who. The participants were asked to rate the subjects according to how “competent” they looked. Among the study’s findings: CEOs look significantly more competent than non-CEOs; CEOs of large companies look significantly more competent than CEOs of small companies; and, all else being equal, the more competent a CEO looked, the fatter the paycheck he or she received in real life. And yet the authors found no relationship whatsoever between how competent a CEO looked and the financial performance of his or her company.

Examples of bias abound. Tall men get hired and promoted more frequently than short men, and make more money. Beautiful women get preferential treatment, too—unless their breasts are too large. According to a national survey by the Employment Law Alliance a few years ago, most American workers don’t believe attractive people in their firms are hired or promoted more frequently than unattractive people, but the evidence shows that they are, overwhelmingly so. Older workers, for their part, are thought to be more resistant to change and generally less competent than younger workers, even though plenty of research indicates that’s just not so. Workers who are too young or, more specifically, are part of the Millennial generation are tarred as entitled and unable to think outside the box.

Malcolm Gladwell recounts a classic example in Blink. Back in the 1970s and ’80s, most professional orchestras transitioned one by one to “blind” auditions, in which each musician seeking a job performed from behind a screen. The move was made in part to stop conductors from favoring former students, which it did. But it also produced another result: the proportion of women winning spots in the most-prestigious orchestras shot up fivefold, notably when they played instruments typically identified closely with men. Gladwell tells the memorable story of Julie Landsman, who, at the time of his book’s publication, in 2005, was playing principal French horn for the Metropolitan Opera, in New York. When she’d finished her blind audition for that role, years earlier, she knew immediately that she’d won. Her last note was so true, and she held it so long, that she heard delighted peals of laughter break out among the evaluators on the other side of the screen. But when she came out to greet them, she heard a gasp. Landsman had played with the Met before, but only as a substitute. The evaluators knew her, yet only when they weren’t aware of her gender—only, that is, when they were forced to make not a personal evaluation but an impersonal one—could they hear how brilliantly she played.

We may like to think that society has become more enlightened since those days, and in many ways it has, but our biases are mostly unconscious, and they can run surprisingly deep. Consider race. For a 2004 study called “Are Emily and Greg More Employable Than Lakisha and Jamal?,” the economists Sendhil Mullainathan and Marianne Bertrand put white-sounding names (Emily Walsh, Greg Baker) or black-sounding names (Lakisha Washington, Jamal Jones) on similar fictitious résumés, which they then sent out to a variety of companies in Boston and Chicago. To get the same number of callbacks, they learned, they needed to either send out half again as many résumés with black names as those with white names, or add eight extra years of relevant work experience to the résumés with black names.

I talked with Mullainathan about the study. All of the hiring managers he and Bertrand had consulted while designing it, he said, told him confidently that Lakisha and Jamal would get called back more than Emily and Greg. Affirmative action guaranteed it, they said: recruiters were bending over backwards in their search for good black candidates. Despite making conscious efforts to find such candidates, however, these recruiters turned out to be excluding them unconsciously at every turn. After the study came out, a man named Jamal sent a thank-you note to Mullainathan, saying that he’d started using only his first initial on his résumé and was getting more interviews.

Perhaps the most widespread bias in hiring today cannot even be detected with the eye. In a recent survey of some 500 hiring managers, undertaken by the Corporate Executive Board, a research firm, 74 percent reported that their most recent hire had a personality “similar to mine.” Lauren Rivera, a sociologist at Northwestern, spent parts of the three years from 2006 to 2008 interviewing professionals from elite investment banks, consultancies, and law firms about how they recruited, interviewed, and evaluated candidates, and concluded that among the most important factors driving their hiring recommendations were—wait for it—shared leisure interests. “The best way I could describe it,” one attorney told her, “is like if you were going on a date. You kind of know when there’s a match.” Asked to choose the most-promising candidates from a sheaf of fake résumés Rivera had prepared, a manager at one particularly buttoned-down investment bank told her, “I’d have to pick Blake and Sarah. With his lacrosse and her squash, they’d really get along [with the people] on the trading floor.” Lacking “reliable predictors of future performance,” Rivera writes, “assessors purposefully used their own experiences as models of merit.” Former college athletes “typically prized participation in varsity sports above all other types of involvement.” People who’d majored in engineering gave engineers a leg up, believing they were better prepared.

Given this sort of clubby, insular thinking, it should come as no surprise that the prevailing system of hiring and management in this country involves a level of dysfunction that should be inconceivable in an economy as sophisticated as ours. Recent survey data collected by the Corporate Executive Board, for example, indicate that nearly a quarter of all new hires leave their company within a year of their start date, and that hiring managers wish they’d never extended an offer to one out of every five members on their team. A survey by Gallup this past June, meanwhile, found that only 30 percent of American workers felt a strong connection to their company and worked for it with passion. Fifty-two percent emerged as “not engaged” with their work, and another 18 percent as “actively disengaged,” meaning they were apt to undermine their company and co-workers, and shirk their duties whenever possible. These headline numbers are skewed a little by the attitudes of hourly workers, which tend to be worse, on average, than those of professional workers. But really, what further evidence do we need of the abysmal status quo?

Because the algorithmic assessment of workers’ potential is so new, not much hard data yet exist demonstrating its effectiveness. The arena in which it has been best proved, and where it is most widespread, is hourly work. Jobs at big-box retail stores and call centers, for example, warm the hearts of would-be corporate Billy Beanes: they’re pretty well standardized, they exist in huge numbers, they turn over quickly (it’s not unusual for call centers, for instance, to experience 50 percent turnover in a single year), and success can be clearly measured (through a combination of variables like sales, call productivity, customer-complaint resolution, and length of tenure). Big employers of hourly workers are also not shy about using psychological tests, partly in an effort to limit theft and absenteeism. In the late 1990s, as these assessments shifted from paper to digital formats and proliferated, data scientists started doing massive tests of what makes for a successful customer-support technician or salesperson. This has unquestionably improved the quality of the workers at many firms.

Teri Morse, the vice president for recruiting at Xerox Services, oversees hiring for the company’s 150 U.S. call and customer-care centers, which employ about 45,000 workers. When I spoke with her in July, she told me that as recently as 2010, Xerox had filled these positions through interviews and a few basic assessments conducted in the office—a typing test, for instance. Hiring managers would typically look for work experience in a similar role, but otherwise would just use their best judgment in evaluating candidates. In 2010, however, Xerox switched to an online evaluation that incorporates personality testing, cognitive-skill assessment, and multiple-choice questions about how the applicant would handle specific scenarios that he or she might encounter on the job. An algorithm behind the evaluation analyzes the responses, along with factual information gleaned from the candidate’s application, and spits out a color-coded rating: red (poor candidate), yellow (middling), or green (hire away). Those candidates who score best, I learned, tend to exhibit a creative but not overly inquisitive personality, and participate in at least one but not more than four social networks, among many other factors. (Previous experience, one of the few criteria that Xerox had explicitly screened for in the past, turns out to have no bearing on either productivity or retention. Distance between home and work, on the other hand, is strongly associated with employee engagement and retention.)
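
The mechanics behind such a rating, stripped to their simplest form, amount to a weighted blend of assessment scores pushed through a few cutoffs. The sketch below is purely illustrative; the feature names, weights, and thresholds are invented, not Xerox’s or Evolv’s actual model:

```python
# Toy version of a screening score that maps to a hiring "traffic light."
# Feature names, weights, and cutoffs are invented for illustration.

def screening_score(candidate: dict) -> float:
    """Blend assessment results into a single 0-100 score."""
    weights = {
        "cognitive_test": 0.40,      # score on the cognitive-skill assessment
        "personality_fit": 0.35,     # fit score from the personality items
        "scenario_judgment": 0.25,   # score on the situational questions
    }
    score = sum(weights[k] * candidate[k] for k in weights)
    # The article notes that membership in one to four social networks was
    # associated with better outcomes; treat that as a small bonus here.
    if 1 <= candidate["social_networks"] <= 4:
        score += 5
    return min(score, 100)

def traffic_light(score: float) -> str:
    """Bucket the score into the three colors described above."""
    if score >= 70:
        return "green"   # hire away
    if score >= 50:
        return "yellow"  # middling
    return "red"         # poor candidate

candidate = {"cognitive_test": 82, "personality_fit": 74,
             "scenario_judgment": 68, "social_networks": 2}
s = screening_score(candidate)
print(round(s, 1), traffic_light(s))   # 80.7 green
```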

When Xerox started using the score in its hiring decisions, the quality of its hires immediately improved. The rate of attrition fell by 20 percent in the initial pilot period, and over time, the number of promotions rose. Xerox still interviews all candidates in person before deciding to hire them, Morse told me, but, she added, “We’re getting to the point where some of our hiring managers don’t even want to interview anymore”—they just want to hire the people with the highest scores.

The online test that Xerox uses was developed by a small but rapidly growing company based in San Francisco called Evolv. I spoke with Jim Meyerle, one of the company’s co‑founders, and David Ostberg, its vice president of workforce science, who described how modern techniques of gathering and analyzing data offer companies a sharp edge over basic human intuition when it comes to hiring. Gone are the days, Ostberg told me, when, say, a small survey of college students would be used to predict the statistical validity of an evaluation tool. “We’ve got a data set of 347,000 actual employees who have gone through these different types of assessments or tools,” he told me, “and now we have performance-outcome data, and we can split those and slice and dice by industry and location.”

Evolv’s tests allow companies to capture data about everybody who applies for work, and everybody who gets hired—a complete data set from which sample bias, long a major vexation for industrial-organizational psychologists, simply disappears. The sheer number of observations that this approach makes possible allows Evolv to say with precision which attributes matter more to the success of retail-sales workers (decisiveness, spatial orientation, persuasiveness) or customer-service personnel at call centers (rapport-building). And the company can continually tweak its questions, or add new variables to its model, to seek out ever stronger correlates of success in any given job. For instance, the browser that applicants use to take the online test turns out to matter, especially for technical roles: some browsers are more functional than others, but it takes a measure of savvy and initiative to download them.
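
Underneath the scale and refinement, the workflow is a correlational screen: for each candidate attribute, measure how strongly it tracks a chosen measure of success, and keep the strongest signals. A toy illustration, with invented attribute names and simulated data rather than Evolv’s:

```python
# Toy correlational screen: which applicant attributes track job success?
# Attribute names and data are simulated; Evolv's 347,000-employee data set
# and models are obviously far richer.
import numpy as np

rng = np.random.default_rng(7)
n = 5000  # pretend we have outcome data for 5,000 hires

attributes = {
    "decisiveness": rng.normal(size=n),
    "spatial_orientation": rng.normal(size=n),
    "persuasiveness": rng.normal(size=n),
    "typing_speed": rng.normal(size=n),
}
# Simulate an outcome (say, retention plus sales) that genuinely depends on
# some attributes and not others, plus noise.
success = (0.6 * attributes["decisiveness"]
           + 0.3 * attributes["persuasiveness"]
           + rng.normal(scale=1.0, size=n))

# Rank attributes by how strongly they correlate with success.
ranked = sorted(
    ((np.corrcoef(vals, success)[0, 1], name) for name, vals in attributes.items()),
    reverse=True,
)
for r, name in ranked:
    print(f"{name:20s} r = {r:+.2f}")
```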

There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people. The distance an employee lives from work, for instance, is never factored into the score given each applicant, although it is reported to some clients. That’s because different neighborhoods and towns can have different racial profiles, which means that scoring distance from work could violate equal-employment-opportunity standards. Marital status? Motherhood? Church membership? “Stuff like that,” Meyerle said, “we just don’t touch”—at least not in the U.S., where the legal environment is strict. Meyerle told me that Evolv has looked into these sorts of factors in its work for clients abroad, and that some of them produce “startling results.” Citing client confidentiality, he wouldn’t say more.

Meyerle told me that what most excites him are the possibilities that arise from monitoring the entire life cycle of a worker at any given company. This is a task that Evolv now performs for Transcom, a company that provides outsourced customer-support, sales, and debt-collection services, and that employs some 29,000 workers globally. About two years ago, Transcom began working with Evolv to improve the quality and retention of its English-speaking workforce, and three-month attrition quickly fell by about 30 percent. Now the two companies are working together to marry pre-hire assessments to an increasing array of post-hire data: about not only performance and duration of service but also who trained the employees; who has managed them; whether they were promoted to a supervisory role, and how quickly; how they performed in that role; and why they eventually left.

The potential power of this data-rich approach is obvious. What begins with an online screening test for entry-level workers ends with the transformation of nearly every aspect of hiring, performance assessment, and management. In theory, this approach enables companies to fast-track workers for promotion based on their statistical profiles; to assess managers more scientifically; even to match workers and supervisors who are likely to perform well together, based on the mix of their competencies and personalities. Transcom plans to do all these things, as its data set grows ever richer. This is the real promise—or perhaps the hubris—of the new people analytics. Making better hires turns out to be not an end but just a beginning. Once all the data are in place, new vistas open up.

For a sense of what the future of people analytics may bring, I turned to Sandy Pentland, the director of the Human Dynamics Laboratory at MIT. In recent years, Pentland has pioneered the use of specialized electronic “badges” that transmit data about employees’ interactions as they go about their days. The badges capture all sorts of information about formal and informal conversations: their length; the tone of voice and gestures of the people involved; how much those people talk, listen, and interrupt; the degree to which they demonstrate empathy and extroversion; and more. Each badge generates about 100 data points a minute.

Pentland’s initial goal was to shed light on what differentiated successful teams from unsuccessful ones. As he described last year in the Harvard Business Review, he tried the badges out on about 2,500 people, in 21 different organizations, and learned a number of interesting lessons. About a third of team performance, he discovered, can usually be predicted merely by the number of face-to-face exchanges among team members. (Too many is as much of a problem as too few.) Using data gathered by the badges, he was able to predict which teams would win a business-plan contest, and which workers would (rightly) say they’d had a “productive” or “creative” day. Not only that, but he claimed that his researchers had discovered the “data signature” of natural leaders, whom he called “charismatic connectors” and all of whom, he reported, circulate actively, give their time democratically to others, engage in brief but energetic conversations, and listen at least as much as they talk. In a development that will surprise few readers, Pentland and his fellow researchers created a company, Sociometric Solutions, in 2010, to commercialize his badge technology.
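
The “too many is as much of a problem as too few” finding describes an inverted-U relationship, the sort of thing a simple quadratic fit can capture. The sketch below simulates such data and fits that curve; it is an illustration of the statistical idea, not Pentland’s actual analysis:

```python
# Minimal sketch of an inverted-U between face-to-face exchanges and team
# performance. Data are simulated; only the badge metrics are Pentland's.
import numpy as np

rng = np.random.default_rng(1)
exchanges = rng.uniform(0, 100, size=200)   # face-to-face exchanges per member
performance = -0.02 * (exchanges - 55) ** 2 + 80 + rng.normal(scale=8, size=200)

# Fit performance as a quadratic function of exchange count.
coeffs = np.polyfit(exchanges, performance, deg=2)
fitted = np.polyval(coeffs, exchanges)

# R^2: the share of variance in performance that exchange count explains.
# (Pentland reports roughly a third for real teams; this toy data is cleaner.)
ss_res = np.sum((performance - fitted) ** 2)
ss_tot = np.sum((performance - performance.mean()) ** 2)
print("R^2 =", round(1 - ss_res / ss_tot, 2))
print("performance peaks at about", round(-coeffs[1] / (2 * coeffs[0])), "exchanges")
```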

Pentland told me that no business he knew of was yet using this sort of technology on a permanent basis. His own clients were using the badges as part of consulting projects designed to last only a few weeks. But he doesn’t see why longer-term use couldn’t be in the cards for the future, particularly as the technology gets cheaper. His group is developing apps to allow team members to view their own metrics more or less in real time, so that they can see, relative to the benchmarks of highly successful employees, whether they’re getting out of their offices enough, or listening enough, or spending enough time with people outside their own team.

Whether or not we all come to wear wireless lapel badges, Star Trek–style, plenty of other sources could easily serve as the basis of similar analysis. Torrents of data are routinely collected by American companies and now sit on corporate servers, or in the cloud, awaiting analysis. Bloomberg reportedly logs every keystroke of every employee, along with their comings and goings in the office. The Las Vegas casino Harrah’s tracks the smiles of the card dealers and waitstaff on the floor (its analytics team has quantified the impact of smiling on customer satisfaction). E‑mail, of course, presents an especially rich vein to be mined for insights about our productivity, our treatment of co-workers, our willingness to collaborate or lend a hand, our patterns of written language, and what those patterns reveal about our intelligence, social skills, and behavior. As technologies that analyze language become better and cheaper, companies will be able to run programs that automatically trawl through the e-mail traffic of their workforce, looking for phrases or communication patterns that can be statistically associated with various measures of success or failure in particular roles.

When I brought this subject up with Erik Brynjolfsson, a professor at MIT’s Sloan School of Management, he told me that he believes people analytics will ultimately have a vastly larger impact on the economy than the algorithms that now trade on Wall Street or figure out which ads to show us. He reminded me that we’ve witnessed this kind of transformation before in the history of management science. Near the turn of the 20th century, both Frederick Taylor and Henry Ford famously paced the factory floor with stopwatches, to improve worker efficiency. And at mid-century, there was that remarkable spread of data-driven assessment. But there’s an obvious and important difference between then and now, Brynjolfsson said. “The quantities of data that those earlier generations were working with,” he said, “were infinitesimal compared to what’s available now. There’s been a real sea change in the past five years, where the quantities have just grown so large—petabytes, exabytes, zetta—that you start to be able to do things you never could before.”

It’s in the inner workings of organizations, says Sendhil Mullainathan, the economist, where the most-dramatic benefits of people analytics are likely to show up. When we talked, Mullainathan expressed amazement at how little most creative and professional workers (himself included) know about what makes them effective or ineffective in the office. Most of us can’t even say with any certainty how long we’ve spent gathering information for a given project, or our pattern of information-gathering, never mind know which parts of the pattern should be reinforced, and which jettisoned. As Mullainathan put it, we don’t know our own “production function.”

The prospect of tracking that function through people analytics excites Mullainathan. He sees it not only as a boon to a business’s productivity and overall health but also as an important new tool that individual employees can use for self-improvement: a sort of radically expanded version of The 7 Habits of Highly Effective People, custom-written for each of us, or at least for each type of job, in the workforce.

Perhaps the most exotic development in people analytics today is the creation of algorithms to assess the potential of all workers, across all companies, all the time.

This past summer, I sat in on a sales presentation by Gild, a company that uses people analytics to help other companies find software engineers. I didn’t have to travel far: Atlantic Media, the parent company of The Atlantic, was considering using Gild to find coders. (No sale was made, and there is no commercial relationship between the two firms.)

In a small conference room, we were shown a digital map of Northwest Washington, D.C., home to The Atlantic. Little red pins identified all the coders in the area who were proficient in the skills that an Atlantic Media job announcement listed as essential. Next to each pin was a number that ranked the quality of each coder on a scale of one to 100, based on the mix of skills Atlantic Media was looking for. (No one with a score above 75, we were told, had ever failed a coding test by a Gild client.) If we’d wished, we could have zoomed in to see how The Atlantic’s own coders scored.

The way Gild arrives at these scores is not simple. The company’s algorithms begin by scouring the Web for any and all open-source code, and for the coders who wrote it. They evaluate the code for its simplicity, elegance, documentation, and several other factors, including the frequency with which it’s been adopted by other programmers. For code that was written for paid projects, they look at completion times and other measures of productivity. Then they look at questions and answers on social forums such as Stack Overflow, a popular destination for programmers seeking advice on challenging projects. They consider how popular a given coder’s advice is, and how widely that advice ranges.

The algorithms go further still. They assess the way coders use language on social networks from LinkedIn to Twitter; the company has determined that certain phrases and words used in association with one another can distinguish expert programmers from less skilled ones. Gild knows these phrases and words are associated with good coding because it can correlate them with its evaluation of open-source code, and with the language and online behavior of programmers in good positions at prestigious companies.
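
However Gild actually weights these signals, the output is a single 1-to-100 number built from several normalized inputs. Here is a deliberately crude sketch of such a composite score, with invented signal names and weights that should not be mistaken for Gild’s model:

```python
# Toy composite score in the spirit of Gild's 1-100 coder rating.
# Signal names, weights, and scaling are invented for illustration.

def normalize(value: float, lo: float, hi: float) -> float:
    """Clamp and rescale a raw signal to the 0-1 range."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def coder_score(signals: dict) -> float:
    """Blend open-source, Q&A, and language signals into one 1-100 score."""
    parts = [
        (signals["code_quality"], 0.45),                           # simplicity, docs, adoption
        (normalize(signals["stackoverflow_rep"], 0, 20000), 0.30), # Q&A reputation
        (signals["language_signal"], 0.20),                        # phrases correlated with skill
        (signals["side_signals"], 0.05),                           # weak "nudges", e.g. site affinities
    ]
    return round(100 * sum(value * weight for value, weight in parts), 1)

print(coder_score({"code_quality": 0.8, "stackoverflow_rep": 12000,
                   "language_signal": 0.7, "side_signals": 0.5}))   # 70.5
```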

Here’s the part that’s most interesting: having made those correlations, Gild can then score programmers who haven’t written open-source code at all, by analyzing the host of clues embedded in their online histories. They’re not all obvious, or easy to explain. Vivienne Ming, Gild’s chief scientist, told me that one solid predictor of strong coding is an affinity for a particular Japanese manga site.

Why would good coders (but not bad ones) be drawn to a particular manga site? By some mysterious alchemy, does reading a certain comic-book series improve one’s programming skills? “Obviously, it’s not a causal relationship,” Ming told me. But Gild does have 6 million programmers in its database, she said, and the correlation, even if inexplicable, is quite clear.

Gild treats this sort of information gingerly, Ming said. An affection for a Web site will be just one of dozens of variables in the company’s constantly evolving model, and a minor one at that; it merely “nudges” an applicant’s score upward, and only as long as the correlation persists. Some factors are transient, and the company’s computers are forever crunching the numbers, so the variables are always changing. The idea is to create a sort of pointillist portrait: even if a few variables turn out to be bogus, the overall picture, Ming believes, will be clearer and truer than what we could see on our own.

Gild’s CEO, Sheeroy Desai, told me he believes his company’s approach can be applied to any occupation characterized by large, active online communities, where people post and cite individual work, ask and answer professional questions, and get feedback on projects. Graphic design is one field that the company is now looking at, and many scientific, technical, and engineering roles might also fit the bill. Regardless of their occupation, most people leave “data exhaust” in their wake, a kind of digital aura that can reveal a lot about a potential hire. Donald Kluemper, a professor of management at the University of Illinois at Chicago, has found that professionally relevant personality traits can be judged effectively merely by scanning Facebook feeds and photos. LinkedIn, of course, captures an enormous amount of professional data and network information, across just about every profession. A controversial start-up called Klout has made its mission the measurement and public scoring of people’s online social influence.

These aspects of people analytics provoke anxiety, of course. We would be wise to take legal measures to ensure, at a minimum, that companies can’t snoop where we have a reasonable expectation of privacy—and that any evaluations they might make of our professional potential aren’t based on factors that discriminate against classes of people.

But there is another side to this. People analytics will unquestionably provide many workers with more options and more power. Gild, for example, helps companies find undervalued software programmers, working indirectly to raise those people’s pay. Other companies are doing similar work. One called Entelo, for instance, specializes in using algorithms to identify potentially unhappy programmers who might be receptive to a phone call (because they’ve been unusually active on their professional-networking sites, or because there’s been an exodus from their corner of their company, or because their company’s stock is tanking). As with Gild, the service benefits the worker as much as the would-be employer.

Big tech companies are responding to these incursions, and to increasing free agency more generally, by deploying algorithms aimed at keeping their workers happy. Dawn Klinghoffer, the senior director of HR business insights at Microsoft, told me that a couple of years ago, with attrition rising industry-wide, her team started developing statistical profiles of likely leavers (hires straight from college in certain technical roles, for instance, who had been with the company for three years and had been promoted once, but not more than that). The company began various interventions based on these profiles: the assignment of mentors, changes in stock vesting, income hikes. Microsoft focused on two business units with particularly high attrition rates—and in each case reduced those rates by more than half.

Over time, better job-matching technologies are likely to begin serving people directly, helping them see more clearly which jobs might suit them and which companies could use their skills. In the future, Gild plans to let programmers see their own profiles and take skills challenges to try to improve their scores. It intends to show them its estimates of their market value, too, and to recommend coursework that might allow them to raise their scores even more. Not least, it plans to make accessible the scores of typical hires at specific companies, so that software engineers can better see the profile they’d need to land a particular job. Knack, for its part, is making some of its video games available to anyone with a smartphone, so people can get a better sense of their strengths, and of the fields in which their strengths would be most valued. (Palo Alto High School recently adopted the games to help students assess careers.) Ultimately, the company hopes to act as matchmaker between a large network of people who play its games (or have ever played its games) and a widening roster of corporate clients, each with its own specific profile for any given type of job.

Knack and Gild are very young companies; either or both could fail. But even now they are hardly the only companies doing this sort of work. The digital trail from assessment to hire to work performance and work engagement will quickly discredit models that do not work—but will also allow the models and companies that survive to grow better and smarter over time. It is conceivable that we will look back on these endeavors in a decade or two as nothing but a fad. But early evidence, and the relentlessly empirical nature of the project as a whole, suggests otherwise.

When I began my reporting for this story, I was worried that people analytics, if it worked at all, would only widen the divergent arcs of our professional lives, further gilding the path of the meritocratic elite from cradle to grave, and shutting out some workers more definitively. But I now believe the opposite is likely to happen, and that we’re headed toward a labor market that’s fairer to people at every stage of their careers. For decades, as we’ve assessed people’s potential in the professional workforce, the most important piece of data—the one that launches careers or keeps them grounded—has been educational background: typically, whether and where people went to college, and how they did there. Over the past couple of generations, colleges and universities have become the gatekeepers to a prosperous life. A degree has become a signal of intelligence and conscientiousness, one that grows stronger the more selective the school and the higher a student’s GPA, that is easily understood by employers, and that, until the advent of people analytics, was probably unrivaled in its predictive powers. And yet the limitations of that signal—the way it degrades with age, its overall imprecision, its many inherent biases, its extraordinary cost—are obvious. “Academic environments are artificial environments,” Laszlo Bock, Google’s senior vice president of people operations, told The New York Times in June. “People who succeed there are sort of finely trained, they’re conditioned to succeed in that environment,” which is often quite different from the workplace.

One of the tragedies of the modern economy is that because one’s college history is such a crucial signal in our labor market, perfectly able people who simply couldn’t sit still in a classroom at the age of 16, or who didn’t have their act together at 18, or who chose not to go to graduate school at 22, routinely get left behind for good. That such early factors so profoundly affect career arcs and hiring decisions made two or three decades later is, on its face, absurd.

But this relationship is likely to loosen in the coming years. I spoke with managers at a lot of companies who are using advanced analytics to reevaluate and reshape their hiring, and nearly all of them told me that their research is leading them toward pools of candidates who didn’t attend college—for tech jobs, for high-end sales positions, for some managerial roles. In some limited cases, this is because their analytics revealed no benefit whatsoever to hiring people with college degrees; in other cases, and more often, it’s because they revealed signals that function far better than college history, and that allow companies to confidently hire workers with pedigrees not typically considered impressive or even desirable. Neil Rae, an executive at Transcom, told me that in looking to fill technical-support positions, his company is shifting its focus from college graduates to “kids living in their parents’ basement”—by which he meant smart young people who, for whatever reason, didn’t finish college but nevertheless taught themselves a lot about information technology. Laszlo Bock told me that Google, too, is hiring a growing number of nongraduates. Many of the people I talked with reported that when it comes to high-paying and fast-track jobs, they’re reducing their preference for Ivy Leaguers and graduates of other highly selective schools.

This process is just beginning. Online courses are proliferating, and so are online markets that involve crowd-sourcing. Both arenas offer new opportunities for workers to build skills and showcase competence. Neither produces the kind of instantly recognizable signals of potential that a degree from a selective college, or a first job at a prestigious firm, might. That’s a problem for traditional hiring managers, because sifting through lots of small signals is so difficult and time-consuming. (Is it meaningful that a candidate finished in the top 10 percent of students in a particular online course, or that her work gets high ratings on a particular crowd-sourcing site?) But it’s completely irrelevant in the field of people analytics, where sophisticated screening algorithms can easily make just these sorts of judgments. That’s not only good news for people who struggled in school; it’s good news for people who’ve fallen off the career ladder through no fault of their own (older workers laid off in a recession, for instance) and who’ve acquired a sort of professional stink that is likely undeserved.

Ultimately, all of these new developments raise philosophical questions. As professional performance becomes easier to measure and see, will we become slaves to our own status and potential, ever-focused on the metrics that tell us how and whether we are measuring up? Will too much knowledge about our limitations hinder achievement and stifle our dreams? All I can offer in response to these questions, ironically, is my own gut sense, which leads me to feel cautiously optimistic. But most of the people I interviewed for this story—who, I should note, tended to be psychologists and economists rather than philosophers—share that feeling.

Scholarly research strongly suggests that happiness at work depends greatly on feeling a sense of agency. If the tools now being developed and deployed really can get more people into better-fitting jobs, then those people’s sense of personal effectiveness will increase. And if those tools can provide workers, once hired, with better guidance on how to do their jobs well, and how to collaborate with their fellow workers, then those people will experience a heightened sense of mastery. It is possible that some people who now skate from job to job will find it harder to work at all, as professional evaluations become more refined. But on balance, these strike me as developments that are likely to make people happier.

Nobody imagines that people analytics will obviate the need for old-fashioned human judgment in the workplace. Google’s understanding of the promise of analytics is probably better than anybody else’s, and the company has been changing its hiring and management practices as a result of its ongoing analyses. (Brainteasers are no longer used in interviews, because they do not correlate with job success; GPA is not considered for anyone more than two years out of school, for the same reason—the list goes on.) But for all of Google’s technological enthusiasm, these same practices are still deeply human. A real, live person looks at every résumé the company receives. Hiring decisions are made by committee and are based in no small part on opinions formed during structured interviews.

One only has to look to baseball, in fact, to see where this all may be headed. In their forthcoming book, The Sabermetric Revolution, the sports economist Andrew Zimbalist and the mathematician Benjamin Baumer write that the analytical approach to player acquisition employed by Billy Beane and the Oakland A’s has continued to spread through Major League Baseball. Twenty-six of the league’s 30 teams now devote significant resources to people analytics. The search for ever more precise data—about the spin rate of pitches, about the muzzle velocity of baseballs as they come off the bat—has intensified, as has the quest to turn those data into valuable nuggets of insight about player performance and potential. Analytics has taken off in other pro sports leagues as well. But here’s what’s most interesting. The big blind spots initially identified by analytics in the search for great players are now gone—which means that what’s likely to make the difference again is the human dimension of the search.

The A’s made the playoffs again this year, despite a small payroll. Over the past few years, the team has expanded its scouting budget. “What defines a good scout?,” Billy Beane asked recently. “Finding out information other people can’t. Getting to know the kid. Getting to know the family. There’s just some things you need to find out in person.”

DON PECK is the deputy editor of The Atlantic magazine and the author of Pinched: How the Great Recession Has Narrowed Our Futures and What We Can Do About It.

How Google Sold Its Engineers on Management – by David A. Garvin

A few years into the company’s life, founders Larry Page and Sergey Brin actually wondered whether Google needed any managers at all. In 2002 they experimented with a completely flat organization, eliminating engineering managers in an effort to break down barriers to rapid idea development and to replicate the collegial environment they’d enjoyed in graduate school. That experiment lasted only a few months: They relented when too many people went directly to Page with questions about expense reports, interpersonal conflicts, and other nitty-gritty issues. And as the company grew, the founders soon realized that managers contributed in many other, important ways—for instance, by communicating strategy, helping employees prioritize projects, facilitating collaboration, supporting career development, and ensuring that processes and systems aligned with company goals.

Google now has some layers but not as many as you might expect in an organization with more than 37,000 employees: just 5,000 managers, 1,000 directors, and 100 vice presidents. It’s not uncommon to find engineering managers with 30 direct reports. Flatt says that’s by design, to prevent micromanaging. “There is only so much you can meddle when you have 30 people on your team, so you have to focus on creating the best environment for engineers to make things happen,” he notes. Google gives its rank and file room to make decisions and innovate. Along with that freedom comes a greater respect for technical expertise, skillful problem solving, and good ideas than for titles and formal authority. Given the overall indifference to pecking order, anyone making a case for change at the company needs to provide compelling logic and rich supporting data. Seldom do employees accept top-down directives without question.

Google downplays hierarchy and emphasizes the power of the individual in its recruitment efforts, as well, to achieve the right cultural fit. Using a rigorous, data-driven hiring process, the company goes to great lengths to attract young, ambitious self-starters and original thinkers. It screens candidates’ résumés for markers that indicate potential to excel there—especially general cognitive ability. People who make that first cut are then carefully assessed for initiative, flexibility, collaborative spirit, evidence of being well-rounded, and other factors that make a candidate “Googley.”

So here’s the challenge Google faced: If your highly skilled, handpicked hires don’t value management, how can you run the place effectively? How do you turn doubters into believers, persuading them to spend time managing others? As it turns out, by applying the same analytical rigor and tools that you used to hire them in the first place—and that they set such store by in their own work. You use data to test your assumptions about management’s merits and then make your case.

To understand how Google set out to prove managers’ worth, let’s go back to 2006, when Page and Brin brought in Laszlo Bock to head up the human resources function—appropriately called people operations, or people ops. From the start, people ops managed performance reviews, which included annual 360-degree assessments. It also helped conduct and interpret the Googlegeist employee survey on career development goals, perks, benefits, and company culture. A year later, with that foundation in place, Bock hired Prasad Setty from Capital One to lead a people analytics group. He challenged Setty to approach HR with the same empirical discipline Google applied to its business operations.

Setty took him at his word, recruiting several PhDs with serious research chops. This new team was committed to leading organizational change. “I didn’t want our group to be simply a reporting house,” Setty recalls. “Organizations can get bogged down in all that data. Instead, I wanted us to be hypothesis-driven and help solve company problems and questions with data.”

People analytics then pulled together a small team to tackle issues relating to employee well-being and productivity. In early 2009 it presented its initial set of research questions to Setty. One question stood out, because it had come up again and again since the company’s founding: Do managers matter?

To find the answer, Google launched Project Oxygen, a multiyear research initiative. It has since grown into a comprehensive program that measures key management behaviors and cultivates them through communication and training. By November 2012, employees had widely adopted the program—and the company had shown statistically significant improvements in multiple areas of managerial effectiveness and performance.

Google is one of several companies that are applying analytics in new ways. Until recently, organizations used data-driven decision making mainly in product development, marketing, and pricing. But these days, Google, Procter & Gamble, Harrah’s, and others take that same approach in addressing human resources needs. (See “Competing on Talent Analytics,” by Thomas H. Davenport, Jeanne Harris, and Jeremy Shapiro, HBR October 2010.)

Unfortunately, scholars haven’t done enough to help these organizations understand and improve day-to-day management practice. Compared with leadership, managing remains understudied and undertaught—largely because it’s so difficult to describe, precisely and concretely, what managers actually do. We often say that they get things done through other people, yet we don’t usually spell out how in any detail. Project Oxygen, in contrast, was designed to offer granular, hands-on guidance. It didn’t just identify desirable management traits in the abstract; it pinpointed specific, measurable behaviors that brought those traits to life.

That’s why Google employees let go of their skepticism and got with the program. Project Oxygen mirrored their decision-making criteria, respected their need for rigorous analysis, and made it a priority to measure impact. Data-driven cultures, Google discovered, respond well to data-driven change.

Making the Case

Project Oxygen colead Neal Patel recalls, “We knew the team had to be careful. Google has high standards of proof, even for what, at other places, might be considered obvious truths. Simple correlations weren’t going to be enough. So we actually ended up trying to prove the opposite case—that managers don’t matter. Luckily, we failed.”

To begin, Patel and his team reviewed exit-interview data to see if employees cited management issues as a reason for leaving Google. Though they found some connections between turnover rates and low satisfaction with managers, those didn’t apply to the company more broadly, given the low turnover rates overall. Nor did the findings prove that managers caused attrition.

As a next step, Patel examined Googlegeist ratings and semiannual reviews, comparing managers on both satisfaction and performance. For both dimensions, he looked at the highest and lowest scorers (the top and bottom quartiles).

“At first,” he says, “the numbers were not encouraging. Even the low-scoring managers were doing pretty well. How could we find evidence that better management mattered when all managers seemed so similar?” The solution came from applying sophisticated multivariate statistical techniques, which showed that even “the smallest incremental increases in manager quality were quite powerful.”

For example, in 2008, the high-scoring managers saw less turnover on their teams than the others did—and retention was related more strongly to manager quality than to seniority, performance, tenure, or promotions. The data also showed a tight connection between managers’ quality and workers’ happiness: Employees with high-scoring bosses consistently reported greater satisfaction in multiple areas, including innovation, work-life balance, and career development.
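
Google has not published the details of that analysis, but the comparison it describes (does manager quality predict retention once seniority, performance, tenure, and promotions are accounted for?) is a standard multivariate regression. A sketch on simulated data, not Google’s:

```python
# Sketch of a multivariate comparison: regress retention on several
# predictors at once and compare standardized effects. Data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 4000
X = np.column_stack([
    rng.normal(size=n),   # manager quality (survey score)
    rng.normal(size=n),   # seniority
    rng.normal(size=n),   # individual performance
    rng.normal(size=n),   # tenure
])
# Simulate retention that depends most strongly on manager quality.
logit = 1.2 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + 0.1 * X[:, 3]
stayed = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(StandardScaler().fit_transform(X), stayed)
for name, coef in zip(["manager quality", "seniority", "performance", "tenure"],
                      model.coef_[0]):
    print(f"{name:16s} {coef:+.2f}")
```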

In light of this research, the Project Oxygen team concluded that managers indeed mattered. But to act on that finding, Google first had to figure out what its best managers did. So the researchers followed up with double-blind qualitative interviews, asking the high- and low-scoring managers questions such as “How often do you have career development discussions with your direct reports?” and “What do you do to develop a vision for your team?” Managers from Google’s three major functions (engineering, global business, and general and administrative) participated; they came from all levels and geographies. The team also studied thousands of qualitative comments from Googlegeist surveys, performance reviews, and submissions for the company’s Great Manager Award. (Each year, Google selects about 20 managers for this distinction, on the basis of employees’ nominations.) It took several months to code and process all this information.

After much review, Oxygen identified eight behaviors shared by high-scoring managers. (See the sidebar “What Google’s Best Managers Do” for the complete list.) Even though the behaviors weren’t terribly surprising, Patel’s colead, Michelle Donovan, says, “we hoped that the list would resonate because it was based on Google data. The attributes were about us, by us, and for us.”

The key behaviors primarily describe leaders of small and medium-sized groups and teams and are especially relevant to first- and second-level managers. They involve developing and motivating direct reports, as well as communicating strategy and eliminating roadblocks—all vital activities that people tend to overlook in the press of their day-to-day responsibilities.

Putting the Findings into Practice

The list of behaviors has served three important functions at Google: giving employees a shared vocabulary for discussing management, offering them straightforward guidelines for improving it, and encapsulating the full range of management responsibilities. Though the list is simple and straightforward, it’s enriched by examples and descriptions of best practices—in survey participants’ own words. These details make the overarching principles, such as “empowers the team and does not micromanage,” more concrete and show managers different ways of enacting them. (See the exhibit “How Google Defines One Key Behavior.”)

The descriptions of the eight behaviors also allow considerable tailoring. They’re inclusive guidelines, not rigid formulas. That said, it was clear early on that managers would need help adopting the new standards, so people ops built assessments and a training program around the Oxygen findings.

To improve the odds of acceptance, the group customized the survey instrument, creating an upward feedback survey (UFS) for employees in administrative and global business functions and a tech managers survey (TMS) for the engineers. Both assessments asked employees to evaluate their managers (using a five-point scale) on a core set of activities—such as giving actionable feedback regularly and communicating team goals clearly—all of which related directly to the key management behaviors.

The first surveys went out in June 2010—deliberately out of sync with performance reviews, which took place in April and September. (Google had initially considered linking the scores with performance reviews but decided that would increase resistance to the Oxygen program because employees would view it as a top-down imposition of standards.) People ops emphasized confidentiality and issued frequent reminders that the surveys were strictly for self-improvement. “Project Oxygen was always meant to be a developmental tool, not a performance metric,” says Mary Kate Stimmler, an analyst in the department. “We realized that anonymous surveys are not always fair, and there is often a context behind low scores.”

Though the surveys weren’t mandatory, the vast majority of employees completed them. Soon afterward, managers received reports with numerical scores and individual comments—feedback they were urged to share with their teams. (See the exhibit “One Manager’s Feedback” for a representative sample.) The reports explicitly tied individuals’ scores to the eight behaviors, included links to more information about best practices, and suggested actions each manager could take to improve. Someone with, say, unfavorable scores in coaching might get a recommendation to take a class on how to deliver personalized, balanced feedback.

People ops designed the training to be hands-on and immediately useful. In “vision” classes, for example, participants practiced writing vision statements for their departments or teams and bringing the ideas to life with compelling stories. In 2011, Google added Start Right, a two-hour workshop for new managers, and Manager Flagship courses on popular topics such as managing change, which were offered in three two-day modules over six months. “We have a team of instructors,” says people-development manager Kathrin O’Sullivan, “and we are piloting online Google Hangout classes so managers from around the world can participate.”

Managers have expressed few concerns about signing up for the courses and going public with the changes they need to make. Eric Clayberg, for one, has found his training invaluable. A seasoned software-engineering manager and serial entrepreneur, Clayberg had led teams for 18 years before Google bought his latest start-up. But he feels he learned more about management in six months of Oxygen surveys and people ops courses than in the previous two decades. “For instance,” he says, “I was worried about the flat organizational structure at Google; I knew it would be hard to help people on my team get promoted. I learned in the classes about how to provide career development beyond promotions. I now spend a third to half my time looking for ways to help my team members grow.” And to his surprise, his reports have welcomed his advice. “Engineers hate being micromanaged on the technical side,” he observes, “but they love being closely managed on the career side.”

To complement the training, the development team sets up panel discussions featuring high-scoring managers from each function. That way, employees get advice from colleagues they respect, not just from HR. People ops also sends new managers automated e-mail reminders with tips on how to succeed at Google, links to relevant Oxygen findings, and information about courses they haven’t taken.

And Google rewards the behaviors it’s working so hard to promote. The company has revamped its selection criteria for the Great Manager Award to reflect the eight Oxygen behaviors. Employees refer to the behaviors and cite specific examples when submitting nominations. Clayberg has received the award, and he believes it was largely because of the skills he acquired through his Oxygen training. The prize includes a weeklong trip to a destination such as Hawaii, where winners get to spend time with senior executives. Recipients go places in the company, too. “In the last round of promotions to vice president,” Laszlo Bock says, “10% of the directors promoted were winners of the Great Manager Award.”

Measuring Results

The people ops team has analyzed Oxygen’s impact by examining aggregate survey data and qualitative input from individuals. From 2010 through 2012, UFS and TMS median favorability scores rose from 83% to 88%. The lowest-scoring managers improved the most, particularly in the areas of coaching and career development. The improvements were consistent across functions, survey categories, management levels, spans of control, and geographic regions.

In an environment of top achievers, people take low scores seriously. Consider vice president Sebastien Marotte, who came to Google in 2011 from a senior sales role at Oracle. During his first six months at Google, Marotte focused on meeting his sales numbers (and did so successfully) while managing a global team of 150 people. Then he received his first UFS scores, which came as a shock. “I asked myself, ‘Am I right for this company? Should I go back to Oracle?’ There seemed to be a disconnect,” he says, “because my manager had rated me favorably in my first performance review, yet my UFS scores were terrible.” Then, with help from a people ops colleague, Marotte took a step back and thought about what changes he could make. He recalls, “We went through all the comments and came up with a plan. I fixed how I communicated with my team and provided more visibility on our long-term strategy. Within two survey cycles, I raised my favorability ratings from 46% to 86%. It’s been tough but very rewarding. I came here as a senior sales guy, but now I feel like a general manager.”

Overall, other managers took the feedback as constructively as Marotte did—and were especially grateful for its specificity. Here’s what Stephanie Davis, director of large-company sales and another winner of the Great Manager Award, says she learned from her first feedback report: “I was surprised that one person on my team didn’t think I had regularly scheduled one-on-one meetings. I saw this person every day, but the survey helped me realize that just seeing this person was different from having regularly scheduled individual meetings. My team also wanted me to spend more time sharing my vision. Personally, I have always been inspired by Eric [Schmidt], Larry, and Sergey; I thought my team was also getting a sense of the company’s vision from them. But this survey gave my team the opportunity to explain that they wanted me to interpret the higher-level vision for them. So I started listening to the company’s earnings call with a different ear. I didn’t just come back to my team with what was said; I also shared what it meant for them.”

Chris Loux, head of global enterprise renewals, remembers feeling frustrated with his low UFS scores. “I had received a performance review indicating that I was exceeding expectations,” he says, “yet one of my direct reports said on the UFS that he would not recommend me as a manager. That struck me, because people don’t quit companies—they quit managers.” At the same time, Loux struggled with the question of just how much to push the lower performers on his team. “It’s hard to give negative feedback to a type-A person who has never received bad feedback in his or her life,” he explains. “If someone gets 95% favorable on the UFS, I wonder if that manager is avoiding problems by not having tough conversations with reports on how they can get better.”

Loux isn’t the only Google executive to speculate about the connection between employees’ performance reviews and their managers’ feedback scores. That question came up multiple times during Oxygen’s rollout. To address it, the people analytics group fell back on a time-tested technique—going back to the data and conducting a formal analysis to determine whether a manager who gave someone a negative performance review would then receive a low feedback rating from that employee. After looking at two quarters’ worth of survey data from 2011, the group found that changes in employee performance ratings (both upward and downward) accounted for less than 1% of variability in corresponding manager ratings across all functions at Google.

“Managing to the test” doesn’t appear to be a big risk, either. Because the eight behaviors are rooted in action, it’s difficult for managers to fake them in pursuit of higher ratings. In the surveys, employees don’t assess their managers’ motivations, values, or beliefs; rather, they evaluate the extent to which their managers demonstrate each behavior. Either the manager has acted in the ways recommended—consistently and credibly—or she has not. There is very little room for grandstanding or dissembling.

“We are not trying to change the nature of people who work at Google,” says Bock. “That would be presumptuous and dangerous. Instead, we are saying, ‘Here are a few things that will lead you to be perceived as a better manager.’ Our managers may not completely believe in the suggestions, but after they act on them and get better UFS and TMS scores, they may eventually internalize the behavior.”

Project Oxygen does have its limits. A commitment to managerial excellence can be hard to maintain over the long haul. One threat to sustainability is “evaluation overload.” The UFS and the TMS depend on employees’ goodwill. Googlers voluntarily respond on a semiannual basis, but they’re asked to complete many other surveys as well. What if they decide that they’re tired of filling out surveys? Will response rates bottom out? Sustainability also depends on the continued effectiveness of managers who excel at the eight behaviors, as well as those behaviors’ relevance to senior executive positions. A disproportionate number of recently promoted vice presidents had won the Great Manager Award, a reflection of how well they’d followed Oxygen’s guidelines. But what if other behaviors—those associated with leadership skills—matter more in senior positions?

Further, while survey scores gauge employees’ satisfaction and perceptions of the work environment, it’s unclear exactly what impact those intangibles have on such bottom-line measures as sales, productivity, and profitability. (Even for Google’s high-powered statisticians, those causal relationships are difficult to establish.) And if the eight behaviors do actually benefit organizational performance, they still might not give Google a lasting edge. Companies with similar competitive profiles—high-tech firms, for example, that are equally data-driven—can mimic Google’s approach, since the eight behaviors aren’t proprietary.

Still, Project Oxygen has accomplished what it set out to do: It not only convinced its skeptical audience of Googlers that managers mattered but also identified, described, and institutionalized their most essential behaviors. Oxygen applied the concept of data-driven continuous improvement directly—and successfully—to the soft skills of management. Widespread adoption has had a significant impact on how employees perceive life at Google—particularly on how they rate the degree of collaboration, the transparency of performance evaluations, and their groups’ commitment to innovation and risk taking.

At a company like Google, where the staff consists almost entirely of “A” players, managers have a complex, demanding role to play. They must go beyond overseeing the day-to-day work and support their employees’ personal needs, development, and career planning. That means providing smart, steady feedback to guide people to greater levels of achievement—but intervening judiciously and with a light touch, since high-performing knowledge workers place a premium on autonomy. It’s a delicate balancing act to keep employees happy and motivated through enthusiastic cheerleading while helping them grow through stretch assignments and carefully modulated feedback. When the process works well, it can yield extraordinary results.

That’s why Prasad Setty wants to keep building on Oxygen’s findings about effective management practice. “We will have to start thinking about what else drives people to go from good to great,” he says. His team has begun analyzing managers’ assessment scores by personality type, looking for patterns. “With Project Oxygen, we didn’t have these endogenous variables available to us,” he adds. “Now we can start to tease them out, using more of an ethnographic approach. It’s really about observations—staying with people and studying their interactions. We’re not going to have the capacity to follow tons of people, but what we’ll lose in terms of numbers, we’ll gain in a deeper understanding of what managers and their teams experience.”

That, in a nutshell, is the principle at the heart of Google’s approach: deploying disciplined data collection and rigorous analysis—the tools of science—to uncover deeper insights into the art and craft of management.

David A. Garvin is the C. Roland Christensen Professor of Business Administration at Harvard Business School. This article draws on material in the HBS case study “Google’s Project Oxygen: Do Managers Matter?” (case number 9-313-110, published April 2013).

How Virtual Humans Can Build Better Leaders – by Randall W. Hill, Jr

 

The aviation industry has long relied on flight simulators to train pilots to handle challenging situations. These simulations are an effective way for pilots to learn from virtual experiences that would be costly, difficult, or dangerous to provide in the real world.

And yet in business, leaders commonly find themselves in tricky situations for which they haven’t trained. From conducting performance reviews to negotiating with peers, they need practice to help navigate the interpersonal dynamics that come into play in interactions where emotions run high and mistakes can result in lost deals, damaged relationships, or even harm to their — or their company’s — reputation.

Some companies, particularly those with substantial resources, do use live role-playing in management and other training. But this training is expensive and limited by time and availability constraints and a lack of consistency. Advances in artificial intelligence and computer graphics are now enabling the equivalent of flight simulators for social skills – simulators that have the potential to overcome these problems. These simulations can provide realistic previews of what leaders might encounter on the job, engaging role-play interactions, and constructive performance feedback for one-on-one conversations or complex dynamics involving multiple groups or departments.

Over the past fifteen years, our U.S. Army-funded research institute has been advancing both the art and science behind virtual human role players, computer-generated characters that look and act like real people, and social simulations — computer models of individual and group behavior. Thousands of service men and women are now getting virtual reality and video game-based instruction and practice in how to counsel fellow soldiers, how to conduct cross-cultural negotiations, and even how to anticipate how decisions will be received by different groups across, and outside of, an organization. Other efforts provide virtual human role players to help train law students in interviewing child witnesses, budding clinicians in improving their diagnostic skills and bedside manner, and young adults on the autism spectrum in answering questions in a job interview.

Our research is exploring how to build resilience by taking people through stressful virtual situations, like the loss of a comrade, child, or leader, before they face them in reality. We are also developing virtual humans that can detect a person’s non-verbal behaviors and react and respond accordingly. Automated content-creation tools allow for customized scenarios, and new software and off-the-shelf hardware are making it possible to create virtual humans modeled on any particular person. It could be you, your boss, or a competitor.

Imagine facing a virtual version of the person you have to lay off. Might you treat him or her differently than a generic character? What if months of preparation for an international meeting went awry just because you declined a cup of tea? Wouldn’t you wish you’d practiced for that? If a virtual audience programmed to react based on your speaking style falls asleep during your speech, I’d be surprised if you didn’t pep up your presentation before facing a real crowd.

It is still early days in our virtual-human development work, but the results are promising. An evaluation of ELITE (the Emergent Leader Immersive Training Environment), the performance-review training system we developed for junior and noncommissioned officers, found that students showed an increase in retention and application of knowledge, an increase in confidence using the skills, and greater awareness of the importance of interpersonal communication skills for leadership.

A related study showed that subjects found the virtual human interaction as engaging and compelling as the same interaction with a live human role-player. I can say from personal experience that asking questions of the students in a virtual classroom can be exhilarating (and unnerving when the virtual student acts just like a “real” student, slouching in boredom and mumbling an answer). Unlike a live human actor, however, a virtual human does not need to be paid, can work anytime, and can be consistent with all students, or take a varied approach if needed. Virtual human systems can have the added advantage of built-in assessment tools to track and evaluate a performance.

Technology alone is not the answer, of course. As I recently wrote in “Virtual Reality and Leadership Development,” a chapter of the book Using Experience to Develop Leadership Talent, virtual humans and video game-based systems are only as effective as the people who program them. No matter how convincing a virtual human is, it’s just an interface. If the instructional design behind it is flawed, it won’t be effective. So we focus as intensively on what a virtual human is designed to teach, how learning will occur, and how to continuously improve its performance as on the technology itself.

I believe simulation technologies are going to change the way we educate and train the workforce, particularly in the area of social skills. In time, just as a pilot shouldn’t fly without practicing in a simulator first, managers and leaders will routinely practice with virtual humans for the challenging situations they’re sure to encounter.

Randall W. Hill, Jr., is the executive director of the University of Southern California Institute for Creative Technologies and an expert in how virtual reality and video games can be used to develop effective learning experiences. He is also a research professor of computer science at USC.

2014: The Year Social HR Matters – by Jeanne Meister


In 2013, organizations finally began in earnest to integrate social technologies into recruitment, development and engagement practices. In 2014, this social integration will become the status quo.

The digital immigrants have now caught up to the digital natives – we are now all digital citizens. The fastest growing demographic on Google+ is 45-54, and on Twitter it is 55-64!

And it’s a good thing that baby boomers and other older generations have embraced these tools, because using social media inside companies will be increasingly important in 2014 and beyond.

For one thing, this year we’ll see more forward-thinking HR leaders making the connection between having a solid social media strategy and finding top talent. After all, 47 percent of Millennials now say a prospective employer’s online reputation matters as much as the job it offers, according to a survey by Spherion Staffing.

The year will also see a new phase of what I call “the consumerization of HR,” wherein employees not only demand to bring their own devices to work, but also want to use these mobile devices to change the way they work with peers, communicate with their manager and even interact with the HR department.

Employees are requesting to view new job postings on their tablets, learn and collaborate with peers on their smartphones, and provide feedback on a team member’s performance with the click of a button. According to a Microsoft survey of 9,000 workers across 32 countries, 31 percent would be willing to spend their own money on a new social tool if it made them more efficient at work. This last finding is quite interesting as it shows the extent to which Millennial employees, who will make up 50% of the 2020 workplace, see the business value of using technology on the job.

2014 is the year HR departments must start creating “social media playbooks” to determine their game plans.

Looking at the big picture helps to determine those priorities. Here are seven social media trends to watch in the coming year as organizations leverage all forms of social collaboration to re-imagine how they source, develop and engage employees.

1. Big Data Lets New Jobs Find You Before You Even Know You’re Looking

Amid our nation’s legendary dearth of skilled workers, talent acquisition has risen to the top of the CEO agenda. According to PwC’s global CEO Study, 66 percent of CEOs say that the absence of necessary skills is their biggest talent challenge. Eighty-three percent say they’re working to change their recruiting strategies to address that fact.

Meanwhile, a host of big data recruiting firms are set to benefit from the newly emphasized value being placed on recruiting. These firms tout that they can find new talent before the prospective employees even know they are in the job market. Companies such as Entelo, Gild, TalentBin, and the U.K.’s thesocialCV analyze not just a job candidate’s LinkedIn profile, Twitter feed, and Facebook postings, but also their activity on specialty sites specific to their professions, such as the open-source community forums Stack Overflow and GitHub (for coders), Proformative (for accountants), and Dribbble (for designers). This approach to recruitment is creating a new technical world order where job applicants are found and evaluated by their merits and contributions, rather than by how well they sell themselves in an interview.

These companies, at the intersection of Big Data and Recruiting, have made a science out of locating “hard to find” talent. Gild does it by scouring the Internet for clues: Is his or her code well regarded by other programmers? Does it get reused? How does the programmer communicate ideas? How does he or she relate on social media sites? How big are their networks and who is in them?

Entelo and TalentBin take a different approach: Their search tools consider the experience and history mentioned in users’ profiles, but also their use of social networks. These companies can pinpoint users who have updated their bios lately or often, to determine which candidates are getting ready to enter the job market.
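A toy version of that signal is easy to sketch. The following Python snippet is purely illustrative; the fields, weights, and scoring are assumptions of mine, not Entelo’s or TalentBin’s actual models. It scores profiles by how recently and how often the bio has changed and surfaces the top scorers as likely job seekers.

    # Illustrative heuristic; not Entelo's or TalentBin's actual model.
    from dataclasses import dataclass

    @dataclass
    class Profile:
        name: str
        bio_updates_last_90_days: int
        days_since_last_update: int

    def job_seeker_score(p: Profile) -> float:
        recency = max(0.0, 1.0 - p.days_since_last_update / 90)   # 1.0 = updated today
        frequency = min(p.bio_updates_last_90_days, 5) / 5        # cap the frequency signal
        return 0.6 * recency + 0.4 * frequency                    # assumed weights

    profiles = [
        Profile("candidate_a", bio_updates_last_90_days=4, days_since_last_update=3),
        Profile("candidate_b", bio_updates_last_90_days=0, days_since_last_update=400),
    ]
    for p in sorted(profiles, key=job_seeker_score, reverse=True):
        print(p.name, round(job_seeker_score(p), 2))   # highest scorers first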

Getting this head start on head hunting is crucial as corporations’ search for top candidates becomes ever more competitive. The goal: finding talent invisible on widely popular social platforms before your competitor does.

2. Mobile Apps Are the New Job-Search Frontier

According to a study of Fortune 500 companies conducted by CareerBuilder, 39% of the US population uses tablet devices. A recent survey conducted by Glassdoor.com even found that 43 percent of job candidates research their prospective employer and read the job description on their mobile device just 15 minutes prior to their interviews. And yet, only 20 percent of Fortune 500 companies have a mobile-optimized career site.

The other 80% of companies are missing the fact that tablet and smartphone users expect to see job listings and information in a visual way, one that reflects the visual approach they bring to their personal lives on the Web.

The food-services corporation Sodexo, the 20th-largest employer in the U.S., got a head start in that process in early 2012, when it developed both a mobile-optimized career site and a smartphone app to pull together all the information about the company’s recruiting efforts into one easy-for-Millennials-to-access place. Prospective employees could visit the mobile app to search and apply for jobs, join a talent community, receive job alerts, and get an insider’s view about what it’s like to work for Sodexo.

The results, according to Arie Ball, VP of Talent Acquisition at Sodexo: 17 percent of job traffic from potential new hires now comes from the mobile app, versus just 2 percent of mobile traffic in early 2012. In the first year, mobile app downloads totaled 15,000, leading to over 2,000 new job candidates and 141 actual new hires, all while saving the company $300,000 in job board postings.

Organizations need to keep pace with the way prospective employees live their lives, and being able to access a mobile app in the job search process will become standard in 2014.

3. Companies Use Gamification In The Workplace

Over 60 percent of the Western world’s population plays video games, and companies are taking note of the huge numbers of future prospective employees who love to play Angry Birds, Fruit Ninja, Candy Crush, and World of Warcraft.

Gamification in the business context is taking the essence of games—attributes like puzzles, play, transparency, design and competition—and applying them to a range of real-world processes inside an organization, from new hire on-boarding, to learning & development, and health & wellness.

Video game players are known for being singularly focused while at play. So naturally, companies have begun to ask how they can harness that same level of engagement and apply it to critical problem solving, on-boarding new hires, or developing new leaders.

With technology research firm Gartner predicting that 40 percent of global Fortune 1,000 companies will soon use gamification as the primary method to transform their business processes, 2013 saw a number of them leveraging game mechanics as a tool to drive higher levels of business performance.

NTT Data, which I profiled in my previous Forbes column, Gamification in Leadership development, has been using gamification to develop leaders, and it is already seeing results. The company’s “Ignite Leadership” Game, aligned with its overall employee engagement framework, was created to develop five key skills for leaders: negotiation, communication, time management, change management and problem solving.  To date, a total of 70 leaders have completed the gamified leadership program, and 50 employees ended up taking on team leadership roles – that’s 50% higher than had done so through traditional training and coaching methods. Plus, these “graduates” of the Ignite Leadership Game generated 220 new ideas in their roles as leaders, which led to a 40 percent increase in employee satisfaction and helped lower attrition by 30 percent.

Gamification in the workplace is not just about using badges, missions, and leaderboards. Instead, the strategy is about truly understanding who you are trying to engage, what motivates them, and how gamification can change the way they work, communicate, and innovate with peers and customers.

4. Re-think The Performance Review

The annual performance review is dead. When 750 senior-level HR professionals were recently asked to grade their current performance management system, 60% gave it a grade of C or below, according to WorldatWork. In another survey, conducted by Globoforce and SHRM, 45% of human resources leaders don’t think annual performance reviews are an accurate appraisal of employees’ work. So what is happening in its place?

There are two innovations on this front. First, companies are leveraging the wisdom of crowds: by drawing on social recognition data, managers are able to continuously collect information on employee performance. The result is an ongoing dialogue rather than a once-a-year review.

Second, with those data inputs in hand, some companies are going one step further to create a new process focused on having a “check-in.” The software company Adobe now relies on managers to control how often and in what form they provide feedback. The check-in is an informal system of real-time feedback, with no forms to fill out or submit to HR.

Instead, managers are trained in how to conduct a check-in, how to focus the conversation on key goals, objectives, development, and strategies for improvement, and how to leverage the wisdom of crowds to create a holistic view of one’s performance. And most importantly, employees are evaluated on the basis of what they achieved against their own goals, rather than how they compare to their peers.

According to Donna Morris, Adobe’s Senior VP of Global People & Places, the company has saved 80,000 hours of management time by replacing its old process, and voluntary attrition is now at an all-time low of 6.7 percent.

The goal here is to make key HR processes more transparent, leverage the wisdom of the crowds and to democratize the flow of information throughout the organization.

5. Learning Will Be Social and Happen Anywhere & Anytime

In my book The 2020 Workplace, an entire chapter was devoted to how and why companies are adopting social learning. I then created the Social Learning Boot Camp, profiling companies re-imagining learning. All that research boiled down to one realization: social learning is not new; in fact, we have always learned from one another in the workplace. Now that social media has revolutionized how we communicate in our personal lives, organizations are bringing “social” inside the enterprise and adopting tools such as Yammer, Adobe Connect, and Google Hangouts to make it far easier to find experts, collaborate with peers, and learn both from and with colleagues.

The results have been impressive, ranging from increased employee collaboration to re-imagining face-to-face learning programs. Nationwide Insurance, for example, now has nearly all of its 36,000 employees active on its internal social platform, making it far easier for employees to find subject experts and solve business problems in one fell swoop, rather than sending copious emails or searching through hard drives.

At Montefiore Hospital, which has nearly 50 primary care locations throughout the New York metropolitan area, social learning was introduced in order to build a sense of community and connection among employees, while creating a shared mental model of leadership. One of the assignments during the leadership development program was to co-create a new behavioral interview guide using the hospital’s social collaboration platform. So rather than just talking in class about the new interview guide, the participants were able to actually co-create one using Yammer.

Finally, the consulting firm Accenture (disclosure: my former employer), which has over 260,000 employees in more than 120 countries, has gone so far as to add gamification to its social collaboration platform. Like other firms adopting gamification, Accenture studied what motivates people to compete while gaming, and then harnessed those principles to spur collaboration and enhance peer-to-peer networking in order to solve client problems.

Ask yourself: are you thinking of social learning as just another delivery mode rather than as a new way of working and communicating?

6. MOOC’s Will Revolutionize Corporate Learning & Development

As noted in my blog post last year on “How MOOCs Will Revolutionize Corporate Learning & Development,” the buzz about MOOCs (Massive Open Online Courses) has focused on the disruption they will bring to institutions of higher education. But far from being limited to that sphere, MOOCs’ most important legacy may in fact be their impact on the world of corporate training – a $150 billion industry.

The early MOOC report card shows a few standout examples of corporate partnerships that use MOOCs to replace certain executive education courses. For example, Yahoo and Coursera have joined forces, enabling Yahoo to fund employees for verified Coursera certificates in computer science, priced at under $100 each. This represents a huge savings compared to what Yahoo would normally spend on university-sponsored executive education.

But other companies are creating their own versions of MOOCs – within the company. Sometimes the courses exist to train prospective candidates in the skills they need to be considered for employment, as a sort of train-before-hire process. Aquent, a staffing firm with over 8,000 employees, recently launched the first in an ongoing series of MOOCs to teach creative professionals how to use emerging technologies. Aquent calls its program “Aquent Gymnasium,” drawing on the connotations of the gym as a place for fun, training, and engaging in a series of “work-outs.”

Aquent offers an interesting example of a company developing a MOOC strategy that is an industry game-changer. Rather than search for job candidates based upon the specs given to them by their clients, Aquent flipped the process, instead creating a brand of MOOCs to help candidates develop the skills Aquent’s clients will seek.

And importantly, Aquent is doing all this with real-world practitioners as instructors rather than university professors.

The results: after its first year of offering a range of MOOCs in Aquent Gymnasium, Aquent had a 10-times return on the program’s investment.

7. Capture Your Organizational Klout   

Klout, which calls itself “the SAT score for business professionals,” measures each user’s online “influence.” A Klout score is a statistical score from 1 to 100, which ranks you on variables such as how many people you reach through social media, how much they trust you, and on what topics you are perceived as a thought leader.
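Klout has not published its formula, but the general shape of such a score is a weighted composite of normalized signals (reach, trust, topical authority) scaled to the 1-100 range. The snippet below is a hypothetical stand-in with invented weights, offered only to make the idea concrete; it is not Klout’s algorithm.

    # Hypothetical composite-score sketch; Klout's real algorithm is proprietary.
    def influence_score(reach: float, trust: float, topic_authority: float) -> int:
        """Inputs are each normalized to 0..1; output is scaled to 1..100."""
        composite = 0.5 * reach + 0.3 * trust + 0.2 * topic_authority   # assumed weights
        return max(1, round(composite * 100))

    print(influence_score(reach=0.7, trust=0.4, topic_authority=0.9))   # -> 65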

To date, most users have focused on building and measuring their individual Klout scores, hoping this will help them land a new job or promotion.

In the year ahead, the focus will also be on Klout for Business. That’s because in June of 2013, Yammer and Klout announced a partnership that allows Klout to factor Yammer users’ data and activity into its social ranking algorithm, and also lets Yammer users display their Klout scores on their Yammer profiles.

For employees, these data points can mean the difference between a raise, a promotion, or staying in the same job. For employers, the ability to assess individual employee expertise at scale can enable companies to take a more strategic approach to what needs to be outsourced and what can be managed internally, based upon the identification of a company’s collective expertise.

Are you and members of your team ready for a year of Social HR? Readers, sound off in the comments section and I will respond.

Jeanne Meister is a Partner at Future Workplace and co-author of The 2020 Workplace. You can follow Jeanne on Twitter, connect with her on LinkedIn, learn more about the Social Learning Boot Camp, and sign up to receive the latest Future Workplace newsletter.

Thinking for the Future – by David Brooks

We’re living in an era of mechanized intelligence, an age in which you’re probably going to find yourself in a workplace with diagnostic systems, different algorithms and computer-driven data analysis. If you want to thrive in this era, you probably want to be good at working with intelligent machines. As Tyler Cowen puts it in his relentlessly provocative recent book, “Average Is Over,” “If you and your skills are a complement to the computer, your wage and labor market prospects are likely to be cheery. If your skills do not complement the computer, you may want to address that mismatch.”

So our challenge for the day is to think of exactly which mental abilities complement mechanized intelligence. Off the top of my head, I can think of a few mental types that will probably thrive in the years ahead.

Freestylers. As Cowen notes, there’s a style of chess in which people don’t play against the computer but with the computer. They let the computer program make most of the moves, but, occasionally, they overrule it. They understand the strengths and weaknesses of the program and the strengths and weaknesses of their own intuition, and, ideally, they grab the best of both.

This skill requires humility (most of the time) and self-confidence (rarely). It’s the kind of skill you use to overrule your GPS system when you’re driving in a familiar neighborhood but defer to it in strange surroundings. It is the sort of skill a doctor uses when deferring to or overruling a diagnostic test. It’s the skill of knowing when an individual case is following predictable patterns and when there are signs it is diverging from them.

Synthesizers. The computerized world presents us with a surplus of information. The synthesizer has the capacity to surf through vast amounts of online data and crystallize a generalized pattern or story.

Humanizers. People evolved to relate to people. Humanizers take the interplay between man and machine and make it feel more natural. Steve Jobs did this by making each Apple product feel like a nontechnological artifact. Someday a genius is going to take customer service phone trees and make them more human. Someday a retail genius is going to figure out where customers probably want automated checkout (the drugstore) and where they want the longer human interaction (the grocery store).

Conceptual engineers. Google presents prospective employees with challenges like the following: How many times in a day do a clock’s hands overlap? Or: Figure out the highest floor of a 100-story building you can drop an egg from without it breaking. How many drops do you need to figure this out? You can break two eggs in the process.
They are looking for the ability to come up with creative methods to think about unexpected problems.
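The egg puzzle rewards a method rather than a memorized answer. With two eggs and k allowed drops, you can distinguish at most k + (k - 1) + ... + 1 = k(k + 1) / 2 floors: drop the first egg at floor k, then k - 1 floors higher, and so on, and walk the second egg up one floor at a time within the last interval. The smallest k with k(k + 1) / 2 >= 100 is 14, so 14 drops suffice in the worst case. (And a clock’s hands overlap 11 times every 12 hours, so 22 times in a day.) A short brute-force check of the egg reasoning, in Python:

    # dp[e][d] = maximum number of floors distinguishable with e eggs and d drops.
    def floors_coverable(eggs: int, drops: int) -> int:
        dp = [[0] * (drops + 1) for _ in range(eggs + 1)]
        for e in range(1, eggs + 1):
            for d in range(1, drops + 1):
                # The first drop either breaks (continue below with e-1 eggs, d-1 drops)
                # or survives (continue above with e eggs, d-1 drops), plus this floor.
                dp[e][d] = dp[e - 1][d - 1] + dp[e][d - 1] + 1
        return dp[eggs][drops]

    drops = 1
    while floors_coverable(2, drops) < 100:
        drops += 1
    print(drops)   # 14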

Motivators. Millions of people begin online courses, but very few actually finish them. I suspect that’s because most students are not motivated to impress a computer the way they may be motivated to impress a human professor. Managers who can motivate supreme effort in a machine-dominated environment are going to be valuable.

Moralizers. Mechanical intelligence wants to be efficient. It will occasionally undervalue essential moral traits, like loyalty. Soon, performance metrics will increasingly score individual employees. A moralizing manager will insist that human beings can’t be reduced to the statistical line. A company without a self-conscious moralizer will reduce human interaction to the cash nexus and end up destroying morale and social capital.

Greeters. An economy that is based on mechanized intelligence is likely to be a wildly unequal economy, even if the government tries to combat that inequality. Cowen estimates that perhaps 15 percent of workers will thrive, with plenty of disposable income. There will be intense competition for these people’s attention. They will favor restaurants, hotels, law firms, foundations and financial institutions where they are greeted by someone who knows their name. People with this capacity for high-end service, and flattery, will find work.

Economizers. The bottom 85 percent is likely to be made up of people with less marketable workplace skills. Some of these people may struggle financially but not socially or intellectually. That is, they may not make much running a food truck, but they can lead rich lives, using the free bounty of the Internet. They could use a class of advisers on how to preserve rich lives on a small income.

Weavers. Many of the people who struggle economically will lack the self-motivation to build rich inner lives for themselves. Many are already dropping out of the labor force in record numbers and drifting into disorganized, disaffected lifestyles. Public and private institutions are going to hire more people to fight this social disintegration. There will be jobs for people who combat the dangerous inegalitarian tendencies of this new world.

Selling Secrets of Phone Users to Advertisers – by Claire Cain Miller and Somini Sengupta

SAN FRANCISCO — Once, only hairdressers and bartenders knew people’s secrets.

Now, smartphones know everything — where people go, what they search for, what they buy, what they do for fun and when they go to bed. That is why advertisers, and tech companies like Google and Facebook, are finding new, sophisticated ways to track people on their phones and reach them with individualized, hypertargeted ads. And they are doing it without cookies, those tiny bits of code that follow users around the Internet, because cookies don’t work on mobile devices.

Privacy advocates fear that consumers do not realize just how much of their private information is on their phones and how much is made vulnerable simply by downloading and using apps, searching the mobile Web or even just going about daily life with a phone in their pockets. And this new focus on tracking users through their devices and online habits comes against the backdrop of a spirited public debate on privacy and government surveillance.

On Wednesday, the National Security Agency confirmed it had collected data from cellphone towers in 2010 and 2011 to locate Americans’ cellphones, though it said it never used the information.

“People don’t understand tracking, whether it’s on the browser or mobile device, and don’t have any visibility into the practices going on,” said Jennifer King, who studies privacy at the University of California, Berkeley and has advised the Federal Trade Commission on mobile tracking. “Even as a tech professional, it’s often hard to disentangle what’s happening.”

Drawbridge is one of several start-ups that have figured out how to follow people without cookies, and to determine that a cellphone, work computer, home computer and tablet belong to the same person, even if the devices are in no way connected. Before, logging onto a new device presented advertisers with a clean slate.

“We’re observing your behaviors and connecting your profile to mobile devices,” said Eric Rosenblum, chief operating officer at Drawbridge. But don’t call it tracking. “Tracking is a dirty word,” he said.

Drawbridge, founded by a former Google data scientist, says it has matched 1.5 billion devices this way, allowing it to deliver mobile ads based on Web sites the person has visited on a computer. If you research a Hawaiian vacation on your work desktop, you could see a Hawaii ad that night on your personal cellphone.

For advertisers, intimate knowledge of users has long been the promise of mobile phones. But only now are numerous mobile advertising services that most people have never heard of — like Drawbridge, Flurry, Velti and SessionM — exploiting that knowledge, largely based on monitoring the apps we use and the places we go. This makes it ever harder for mobile users to escape the gaze of private companies, whether insurance firms or shoemakers.

Ultimately, the tech giants, whose principal business is selling advertising, stand to gain. Advertisers using the new mobile tracking methods include Ford Motor, American Express, Fidelity, Expedia, Quiznos and Groupon.

“In the old days of ad targeting, we give them a list of sites and we’d say, ‘Women 25 to 45,’ “ said David Katz, the former general manager of mobile at Groupon and now at Fanatics, the sports merchandise online retailer. “In the new age, we basically say, ‘Go get us users.’ “

In those old days — just last year — digital advertisers relied mostly on cookies. But cookies do not attach to apps, which is why they do not work well on mobile phones and tablets. Cookies generally do work on mobile browsers, but do not follow people from a phone browser to a computer browser. The iPhone’s mobile Safari browser blocks third-party cookies altogether.

Even on PCs, cookies have lost much of their usefulness to advertisers, largely because of cookie blockers.

Responding to this problem, the Interactive Advertising Bureau started a group to explore the future of the cookie and alternatives, calling current online advertising “a lose-lose-lose situation for advertisers, consumers, publishers and platforms.” Most recently, Google began considering creating an anonymous identifier tied to its Chrome browser that could help target ads based on user Web browsing history.

For many advertisers, cookies are becoming irrelevant anyway because they want to reach people on their mobile devices.

Yet advertising on phones has its limits.

For example, advertisers have so far had no way to know whether an ad seen on a phone resulted in a visit to a Web site on a computer. They also have been unable to connect user profiles across devices or even on the same device, as users jump from the mobile Web to apps.

Without sophisticated tracking, “running mobile advertising is like throwing money out the window. It’s worse than buying TV advertisements,” said Ravi Kamran, founder and chief executive of Trademob, a mobile app marketing and tracking service.

This is why a service that connects multiple devices with one user is so compelling to marketers.

Drawbridge, which was founded by Kamakshi Sivaramakrishnan, formerly at AdMob, the Google mobile ad network, has partnerships with various online publishers and ad exchanges. These partners send Drawbridge a notification every time a user visits a Web site or mobile app, which is considered an opportunity to show an ad. Drawbridge watches the notifications for behavioral patterns and uses statistical modeling to determine the probability that several devices have the same owner and to assign that person an anonymous identifier.

So if someone regularly checks a news app on a phone in bed each morning, browses the same news site from a laptop in the kitchen, visits from that laptop at an office an hour later and returns that night on a tablet in the same home, Drawbridge concludes that those devices belong to the same person. And if that person shopped for airplane tickets at work, Drawbridge could show that person an airline ad on the tablet that evening.
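The matching step comes down to comparing behavioral fingerprints. The sketch below is only an illustration of that idea, not Drawbridge’s actual model; the features (site, hour of day, coarse location), the cosine-similarity measure, and the link threshold are all assumptions of mine. Two devices whose event histories look similar enough get linked under one anonymous identifier.

    # Illustrative device-matching sketch; not Drawbridge's actual algorithm.
    from collections import Counter

    def fingerprint(events):
        """events: iterable of (site, hour_of_day, coarse_location) tuples."""
        return Counter(events)

    def similarity(fp_a, fp_b):
        """Cosine similarity between two event-count fingerprints (0..1)."""
        common = set(fp_a) & set(fp_b)
        dot = sum(fp_a[k] * fp_b[k] for k in common)
        norm_a = sum(v * v for v in fp_a.values()) ** 0.5
        norm_b = sum(v * v for v in fp_b.values()) ** 0.5
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    phone = fingerprint([("news.example", 8, "home"), ("news.example", 8, "home"),
                         ("travel.example", 21, "home")])
    laptop = fingerprint([("news.example", 8, "home"), ("travel.example", 21, "home"),
                          ("shop.example", 14, "office")])

    score = similarity(phone, laptop)
    if score >= 0.4:                       # assumed link threshold
        print("link devices to one anonymous identifier", round(score, 2))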

Ms. Sivaramakrishnan said its pinpointing was so accurate that it could show spouses different, personalized ads on a tablet they share. Before, she said, “ad targeting was about devices, not users, but it’s more important to understand who the user is.”

Similarly, if you use apps for Google Chrome, Facebook or Amazon on your cellphone, those companies can track what you search for, buy or post across your devices when you are logged in.

Other companies, like Flurry, get to know people by the apps they use.

Flurry embeds its software in 350,000 apps on 1.2 billion devices to help app developers track things like usage. Its tracking software appears on the phone automatically when people download those apps. Flurry recently introduced a real-time ad marketplace to send advertisers an anonymized profile of users the moment they open an app.

Profiles are as detailed as wealthy bookworms who own small businesses or new mothers who travel for business and like to garden. The company has even more specific data about users that it does not yet use because of privacy concerns, said Rahul Bafna, senior director of Flurry.

Wireless carriers know even more about us, from our home ZIP codes to how much time we spend on mobile apps and which sites we visit on mobile browsers. Verizon announced in December that its customers could authorize it to share that information with advertisers in exchange for coupons. AT&T announced this summer that it would start selling aggregated customer data to marketers, while offering a way to opt out.

Neither state nor federal law prohibits the collection or sharing of data by third parties. In California, app developers are required to post a privacy policy and to clearly state what personal information they collect and how they share it. Still, that leaves much mystery for ordinary mobile users.