GURU INTERVIEW: REVA BASCH, SUPER-SEARCHER
Reva Basch, Aubergine Information Services, http://www.well.com/user/reva
Author of Researching Online For Dummies and Secrets of the Super Net Searchers
"Reva's (W)rap" columnist, ONLINE magazine
Marylaine: How do you stay current with new developments in research and
technology? What do you read, what sites do you routinely visit, what
listservs or discussion groups do you follow, etc.?
Reva: I subscribe to several e-newsletters and daily or semi-weekly news updates.
Years ago, I signed up for half a dozen or so publications in HTML through
Netscape's Inbox Direct. I've dropped some of them, but I'm still getting
Wired News and C|Net News, as well as the New York Times' Technology
update. I also get half a dozen or so newsletters in ASCII, including
Edupage, NewsScan (a spinoff by the former editors of Edupage), Bob
Seidman's Online Insider, and a very interesting one called The Rapidly
Changing Face of Computing, put out by a fellow named Jeff Harrow (I think;
I don't have a copy at hand to verify) at Compaq.
Of course I subscribe to Danny Sullivan's Search Engine Report; that
reminds me periodically to go take a look at Greg Notess' site --
http://www.notess.com -- and occasionally I go back to Danny's Search Engine Watch site -- http://searchenginewatch.com/ -- for more detailed information on something he's written about. I also
read Outsell's e-Brief for news about the online industry.
One pub that I enjoy just for fun is Netsurfer Digest -- http://www.netsurf.com/nsd/ -- which covers some weird
and/or interesting sites in an intelligent, funny, and non-hyped way. I
used to subscribe to Net-Happenings -- http://scout.cs.wisc.edu/caservices/net-hap/index.html -- but I just couldn't keep up. Same
thing with BUSLIB-L; the volume is just so great that it quickly gets out
of hand. I actually don't follow many listservs anymore; the
signal-to-noise ratio is so low on many of them. I do subscribe to a
computer book writers list, and to my professional association's listserv,
AIIP-L, which is restricted to members of the Association of Independent
Information Professionals.
I pick up a lot of information about new sources and technologies on The
WELL, an online community I've been a part of since 1988. Folks there are
exceedingly well informed about both current technologies and emerging
trends; in fact, a lot of trend-MAKERS hang out there, and you can
eavesdrop on their conversations, so to speak, or pick their brains
informally.
As for print pubs, I read Online and Database (now eContent), Searcher, Information Today, and the CyberSkeptic's Guide to Internet Research, as
well as PriceWatcher, Bibliodata's new newsletter about online pricing. I
look at The Information Advisor newsletter; I used to be a contributing
editor. I still read WIRED, though it no longer feels, to me, like it's on
the bleeding edge of technology. I look at Upside for the Silicon Valley
business perspective, and Brill's Content, which covers media issues in
general but devotes a considerable chunk of space to the web and electronic
content. I used to subscribe to Fast Company and The Industry Standard, but
dropped them both -- information overload. I also look at the Special
Libraries Association's monthly Information Outlook, and at a Canadian
journal called Information Highways. I'm sure I'm forgetting something!
Marylaine: In overseeing your new series of Super Searcher books, what are the most interesting things you've learned from the Super Searchers?
Reva: It's hard to summarize. The first book in the new series, Super Searchers Do Business, by Mary Ellen Bates, is about business searching and was just
published in June. The second one, by T.R. Halvorson, an attorney and legal
researcher, is called Law of the Super Searchers and will be out in the
fall. We have titles on finance and investment, medical and healthcare
information, and news and current events lined up after that. Information
Today, Inc. is the publisher, and they're very excited about and extremely
supportive of the series.
I'd say that the single most interesting thing I've learned from the "new"
super searchers so far is that -- despite the rise of the web and all the
other technological changes that the web has brought about, not to mention
the tremendous expansion in content and in our options for accessing that
content -- the skills required to be a successful researcher really have
not changed. It still takes creativity, above all, a flexible approach to
problem-solving, a good command of language, the ability to discern subtle
connections and to make intuitive leaps instead of just proceeding down an
orderly, linear path. Those skills -- or maybe they're characteristics one
is born with -- still define a virtuoso searcher, as they did when I
published the original Secrets of the Super Searchers in 1993, and Secrets
of the Super Net Searchers in 1996. I feel strongly that they will continue
to do so. Yes, you can take training and learn on the job, but to be more
than a merely competent researcher -- to be an INSPIRED one -- you really
have to have it inside you. It isn't something you learn.
Marylaine: Of all the new developments in search technologies, which ones
do you think librarians need to pay most attention to?
Reva: Natural language querying and search processing, XML and other meta-data
schemes, and whatever enhancements the next generation of search engines
comes up with. We're seeing a lot of differentiation among search engines
today, especially in how they present the data to us. Northern Light -- http://www.northernlight.com/ -- with
its Custom Folders is just one example. I'm also interested in new
algorithms for retrieving and ranking search results. With Boolean
searching, we usually defaulted to sorting by date, most recent first. Web engines
generally rank by relevance. Now we see experiments in collaborative
filtering, where the position of an item on your hit list is determined by
what other people thought of that resource, or how many other sites
(especially sites generally regarded as important or authoritative) link to
it, or its popularity as measured by the amount of traffic to it. It's a
fascinating idea, and worth keeping an eye on.
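The link-counting idea Reva describes can be sketched in a few lines of Python. This is a toy illustration only, not any engine's actual algorithm; the site names, link data, and authority weights are all invented for the example:

```python
# Toy illustration of link-popularity ranking: pages that more sites
# (especially "authoritative" ones) point to rise to the top of the
# hit list. All data here is invented for the example.

inbound_links = {
    "site-a": ["site-b", "site-c", "site-d"],  # three sites link to A
    "site-b": ["site-a"],
    "site-c": [],
}

# Links from sites regarded as authoritative count for more.
authority_weight = {"site-b": 2.0}  # assume B is an authoritative site

def popularity_score(page):
    """Sum the weights of every site linking to this page."""
    return sum(authority_weight.get(src, 1.0)
               for src in inbound_links.get(page, []))

hits = ["site-c", "site-a", "site-b"]
ranked = sorted(hits, key=popularity_score, reverse=True)
print(ranked)  # most-linked-to page first
```

The same skeleton could just as easily rank by traffic or by collaborative ratings: only the scoring function changes, not the sort.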
Marylaine: One of the things you talked about at NYLINK is the visual display of information. Do you think librarians should be doing more with visual display to enhance our information services?
Reva: In the best of all possible worlds, of course. Presenting complex data and
the relationships between documents and other kinds of information in a
visual way is something that most people find intuitive. You can absorb
massive amounts of information -- or information ABOUT information -- by
envisioning it in the form of a grid, a branching tree structure, a
topographic map, a set of objects in a three-dimensional space, or
whatever. Imagine displaying your subject collection that way; patrons
could see at a glance that you're strong in the classics, say, or have a
terrific biomedical collection. Focus down a bit and they could tell that
you've got an extensive collection of metaphysical poetry but aren't that
strong in Shakespeare. You can see the possibilities. Visualization
software is being used today for proprietary knowledge management
endeavors; there's no reason, other than budget, bandwidth, and the
learning curve, not to implement it in a library setting. It's a natural --
but, as I said, "in the best of all worlds..."
Marylaine: In a world where patrons want and expect full-text when they sit
down at a computer, what do you think will happen with traditional databases
which have only citations and abstracts?
Reva: And indexing, too, I assume. That's such an interesting question, because
abstract-and-index databases add so much control and precision to
searching, and do so much to streamline the evaluation of search results. I
started life -- my professional life, anyway -- as an engineering
librarian. I loved to search Ei Compendex, NTIS, Inspec, all those
technology databases. But if a database record you're interested in stops
with a cite and an abstract, you're faced with the document delivery
problem. As your question implies, that's archaic. I think the solution
lies in hybrid databases where you can elect to do a controlled vocabulary
search or confine your search to the abstract where the most important
concepts are likely to appear, then search the full text if you haven't
found what you want. In any event, the full text should be there, or a
hyperlink away, whether on the web or on a CD-ROM or wherever.
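The tiered strategy Reva outlines -- controlled vocabulary first, then the abstract, then full text -- can be sketched as a simple fallback loop. This is a hypothetical sketch; the record fields and sample data are invented, not any vendor's API:

```python
# Sketch of a tiered "hybrid database" search: try the most precise
# field first, and fall back to broader fields only when a tier
# returns nothing. Records and field names are invented.

records = [
    {"descriptors": ["search engines"],
     "abstract": "Ranking algorithms for web retrieval.",
     "fulltext": "Full text of an article about ranking algorithms..."},
    {"descriptors": ["knowledge management"],
     "abstract": "Visualization tools for proprietary collections.",
     "fulltext": "Full text that mentions search engines in passing..."},
]

def field_text(record, field):
    """Flatten a field (list or string) to searchable text."""
    value = record[field]
    return " ".join(value) if isinstance(value, list) else value

def tiered_search(query, records):
    """Return (tier, hits) for the first tier that matches the query."""
    for field in ("descriptors", "abstract", "fulltext"):
        hits = [r for r in records
                if query.lower() in field_text(r, field).lower()]
        if hits:
            return field, hits
    return None, []

tier, hits = tiered_search("search engines", records)
print(tier, len(hits))
```

A controlled-vocabulary hit stops the search at the most precise tier; only when the descriptors and abstracts come up empty does the loop widen to full text.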
Marylaine: Do you think publishers will continue to offer small, highly
targeted databases, or do you think the future belongs to large aggregated
databases?
Reva: Your questions are so good! I still mourn the demise of Coffeeline on
Dialog. If small, highly targeted databases die out, it won't be for lack
of interest or utility, but because of economics and the fiercely
competitive struggle for attention in today's information marketplace.
Important, research-intensive segments of the economy -- biomedical
researchers, chemists, financial analysts and investment bankers, for
example -- are well served by specialized information providers using
systems and software that no general vendor of aggregated databases could
possibly match. For now, at least, although there are signs of aggregation
on the web, the nature of the beast is working against it. What I think
MIGHT happen is that search engines -- or maybe bots, software entities
that we program to keep abreast of our research interests -- will become so
sophisticated that we can present them with our research request and
they'll go out and check all the appropriate databases, small and large,
aggregated and un-, using whatever query language each individual database
understands, and taking advantage of all the special features they offer,
and return to us with the answer or data sets we need: Voila! But then,
I've always been a technology optimist.
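The bot Reva envisions is essentially a federated-search dispatcher: translate one research request into each source's native query language, run it everywhere, and gather the results. A minimal sketch, with all source names and query translators invented for illustration:

```python
# Sketch of a federated-search "bot": one research request is
# translated into each database's own query syntax, submitted to
# every source, and the results collected. The sources and their
# syntaxes here are invented stand-ins.

sources = {
    "boolean_db": lambda terms: " AND ".join(terms).upper(),  # classic Boolean
    "web_engine": lambda terms: "+" + " +".join(terms),       # required-term syntax
}

def dispatch(terms, run_query):
    """Build each source's native query and collect what it returns."""
    results = {}
    for name, translate in sources.items():
        native_query = translate(terms)
        results[name] = run_query(name, native_query)
    return results

# Stand-in for real connectors: just echo what would be sent.
queries = dispatch(["super", "searchers"],
                   run_query=lambda src, q: f"{src} <- {q}")
print(queries)
```

In a real system each `run_query` connector would speak to an actual database; the point of the sketch is the per-source translation step, which is what lets one request exploit each system's special features.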
Marylaine: Reva, thank you so much. I learned a lot from you at NYLINK and here, and I wanted to share that with my readers.