My Word's Worth: vol. 4, #18
WHAT YOU DO WITH FALLING APPLES
The Chronicle of Higher Education reports that a professor has developed a computer program to grade student essays for him. He says that when he feeds the computer examples of good essays, it can deduce general principles of good writing and apply them. Since he now doesn't have to spend a lot of time grading the papers, he can ask the students to do much more writing than he'd ever asked of them before, so his computer program actually improves the students' writing as well, he says. Ain't technology wonderful?
The only thing is, I doubt computers can recognize good writing. Good spelling, yes. Even good grammar. It would presumably mark me down for the fact that my last two statements were sentence fragments. But will it recognize contradictions and bad logic? Will it notice that an essay rambles on endlessly and says nothing? Will it realize that some ideas are totally unsupported by evidence? For that matter, will it recognize great writing, or original ideas?
I also doubt a computer is going to recognize when a student is wrong but brilliant, like the young man who totally missed the point of my assignment on Dante's Inferno. When I asked students to describe their own personal nine circles of hell, I expected them to tell me what they felt were the most forgivable and least forgivable sins. But this student described hell as a place much like campus registration in that era: a large gymnasium where people sat at tables, handing out punch cards for each virtue and vice. You went around from table to table, collecting all the punch cards that applied to you, and turned them in to the computer, which sorted them and assigned you to your appropriate place in hell (unless you had neither sins nor virtues to your credit, in which case you were doomed to circle the gymnasium forever).
And yet I'm willing to bet that students graded by computers will accept those grades, and will not argue them the way they do argue when professors complain about spelling that has been approved by a spell-check program, or when they criticize information students derived from computers. We have an overwrought reverence for technologies we do not understand. Arthur C. Clarke said that any sufficiently advanced technology is indistinguishable from magic, and that is how we treat computers--as magic, as godlike, as incapable of error.
And certainly there are things computers do better than our minds, like sorting through large amounts of data. We, on the other hand, are not good at remembering details--most of us have no gift for precision and exactness. If we start counting to a million, long before we get there, our minds are haring off after something more interesting. We aren't all that good at calculating, either.
That's why when we have large amounts of information to store and manipulate, we entrust it to computers. It's easy to remember that you're allergic to aspirin, and therefore always make sure your medications don't contain it. But if you're taking 10 different drugs, you can't possibly remember how each one reacts with every other drug. We count on pharmacists to check the drug interaction database for us before they fill our prescriptions.
But that only works if somebody first feeds the data in and tells the computer HOW to sort it. Years ago my husband and I typed punch cards for every piece of music owned by our campus radio station, feeding in the title, format, and length of each piece, so that the computer could then randomly program music for one-, two-, or three-hour time blocks. The idea was that the computer, by removing human prejudices from the music selection process, would guarantee a pleasing variety of music and make sure that ALL the music, popular or obscure, would get a fair hearing.
What we had not counted on was the computer developing an absolute passion for Holst's "The Planets" and Berlioz' "Symphonie Fantastique," each exactly 57.5 minutes long. It treated us to a month-long festival of Mahler and Bruckner symphonies, which also ran about an hour apiece. On one memorable occasion it gave us three performances of "Symphonie Fantastique" in a row. The computer understood length, and nothing about what people respond to in music. We had to refine our instructions: tell it to vary the lengths of the pieces it used and the formats it chose.
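For the curious, here is a rough sketch, in modern Python rather than the punch-card system we actually used, of the kind of logic involved. The catalog, the numbers, and the names are all invented for illustration; the point is only that a scheduler which knows nothing but length will cheerfully repeat the same 57.5-minute piece, and that the cure is an explicit instruction to vary titles and formats.

    import random

    # An invented catalog: title, format, and length in minutes (the same
    # fields we keyed onto the punch cards; these entries are made up).
    CATALOG = [
        ("Holst: The Planets", "orchestral", 57.5),
        ("Berlioz: Symphonie Fantastique", "orchestral", 57.5),
        ("Mahler: Symphony No. 1", "orchestral", 55.0),
        ("Bach: Brandenburg Concerto No. 3", "chamber", 11.0),
        ("Joplin: Maple Leaf Rag", "ragtime", 3.5),
    ]

    def fill_block(minutes, vary=True):
        # Pick pieces at random until the block is (nearly) full.  With
        # vary=False the scheduler knows length and nothing else, so nothing
        # stops it from repeating one 57.5-minute piece all afternoon.
        # vary=True is the refinement we had to add: never repeat the
        # previous title or format.
        remaining, schedule = minutes, []
        last_title = last_format = None
        for _ in range(200):                 # cap the attempts, then give up
            title, fmt, length = random.choice(CATALOG)
            if length > remaining:
                continue
            if vary and (title == last_title or fmt == last_format):
                continue
            schedule.append(title)
            last_title, last_format = title, fmt
            remaining -= length
        return schedule

Ask it to fill a three-hour block with vary=False, and three performances of "Symphonie Fantastique" in a row is a perfectly plausible result.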
This is why I am deeply suspicious of using computers for anything the human mind is better at, such as decision-making, or anything requiring imagination, judgment, or expert knowledge. If a service provider wants to limit bawdiness online, it can program its servers to block certain words, as AOL did when it excluded the word "breast" from use in names of discussion groups. But machines do not understand context, so they also blocked "breast cancer" as a discussion topic (since AOL had not thought to block the word "hooter," the women simply formed "hooter cancer" discussion groups until the humans at AOL came to their senses).
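A sketch of what such a filter amounts to (not AOL's actual code; the one-word blocklist and the function name are invented for illustration) makes the failure plain: the test sees words, not context.

    BLOCKED_WORDS = {"breast"}   # an invented one-word blocklist

    def allow_group_name(name):
        # Reject a discussion-group name if any word in it is on the blocklist.
        return not any(word in BLOCKED_WORDS for word in name.lower().split())

    allow_group_name("breast cancer survivors")   # False: blocked, context ignored
    allow_group_name("hooter cancer survivors")   # True: not on the list, so it sails through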
Vendors like Amazon and producers of search engines understand the limits of machine intelligence, and are trying to quantify and harness human judgment to improve their searching power and recommendations. HotBot analyzes which sites most of its previous users have chosen for your topic, and invites you to examine those "Direct Hits"--a useful service in some cases, though not if you are simply following the choices of the clueless. Amazon makes recommendations based on what other books were ordered by people who bought the same books you chose.
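Stripped to its bones, the "customers who bought X also bought Y" logic is simple co-purchase counting. What follows is only my guess at the general shape of such a system, written as a small Python sketch over an invented order history, not anything Amazon actually runs.

    from collections import Counter

    # A tiny invented order history: each set is one customer's purchases.
    ORDERS = [
        {"Dave Barry Is Not Making This Up", "Catch-22"},
        {"Dave Barry Is Not Making This Up", "The Hunt for Red October"},
        {"Catch-22", "The Hunt for Red October"},
    ]

    def recommend(my_books, orders=ORDERS, top_n=3):
        # Tally everything bought by customers who share at least one book
        # with me; the most common co-purchases become my "recommendations."
        counts = Counter()
        for order in orders:
            if order & my_books:                 # this customer overlaps with me
                counts.update(order - my_books)  # credit the rest of their cart
        return [title for title, _ in counts.most_common(top_n)]

    recommend({"Dave Barry Is Not Making This Up"})
    # -> ['Catch-22', 'The Hunt for Red October'] (a tie, so order may vary)

The tally knows what turned up in the same shopping carts; it never asks why.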
How well does it do? I have bought from Amazon books of columns by Bob Greene, Dave Barry and Bailey White, books on politics, on history, on the social consequences of technology, and on making our cities better. I have bought novels by Stephen King, Barbara Paul and Jon Katz. And it recommends for my reading pleasure Catch-22, The Old Man and the Sea, two novels by John Grisham, and The Hunt for Red October. Huh? As for nonfiction, it thinks I'm going to enjoy curling up with Guide to Methods for Students of Political Science, not to mention Democracy and the Global Order. I think I'll pass.
The problem, again, is that computers quantify people's choices without having a clue WHY they made those particular choices. But say to a librarian, "I like Nora Roberts' books; what else would I like?" and we wouldn't recommend just any best-selling romance novel. Understanding that Roberts' fans like her bright, funny heroines, her manly and amusing heroes, good dialogue, and great sex, we would recommend romances that share those qualities.
Over millions of years of evolution, the members of our species who survived were the ones whose brains recognized patterns: beware of large yellow animals with manes, don't eat toadstools, use bark from this kind of tree to relieve fever. But the survivors also learned to make distinctions between things that appeared similar but were not--certain berries were good, but others were poisonous. Our brains became good at generalizing from knowledge on one hand, and refining categories of knowledge on the other.
That's why the computer makes an ideal partner for our minds--it can zero in on details when our minds are focusing on grand design. It can sort and manipulate the numbers, perceive patterns where we cannot because the numbers are too large for us to grasp. It can compute the number of apples that have ever fallen, and even, given a video capability, sort them.
But we, from the fall of apples, can imagine apple pandowdy and deduce the existence of gravity. We are the ones who define what the problem is, what kind of data needs to be gathered, and how it is to be sorted. We tell computers which kinds of data to correlate: a congressman's voting record on tobacco issues, for example, with the donations he received from the tobacco lobby. And when the machine spits out its answers, we are the ones who interpret the data, map it, give it meaning, and act on it.
What alarms me is when we try to use technology not to supplement human intelligence, but to replace it. It's clear that for many politicians, the great hope of computers in schools is that they can replace those pesky, expensive teachers. They salivate over the savings to be gained by delivering courses by web and cable to thousands of students instead of to just one classroom per teacher. Corporate medicine hopes that computers can read lab results as well as humans, that databases of diseases and symptoms can diagnose a patient's condition as well as doctors, and that counselor programs like ELIZA can replace psychologists. In a society that values efficiency more than it values human beings, the computer is a dangerously attractive solution.
That's why I am hoping that skilled programmers can come up with expert programs that can perform all the functions of politicians, CEOs, and financial analysts. Call one GATES, let's say, and another, perhaps, GREENSPAN. Do you suppose this might give our leaders a new awareness of the limits of our intelligent tools, and of the unique value of the intelligence, experience, and even wisdom of human beings?
NOTE: My thinking is always a work in progress. You could mentally insert all my columns in between these two sentences: "This is something I've been thinking about," and "Does this make any sense to you?" I welcome your thoughts. Please send your comments about these columns to: marylaine at netexpress.net. Since I've written a lot of these, some of them many years ago, help me out by telling me which column you're referring to.
I'll write columns here whenever I really want to share an idea with you and can find time to write them. If you want to be notified when a new one is up, send me an e-mail and include "My Word's Worth" in the subject line.