The Traditional Future - O'Reilly Radar
Lisa Spiro on 15 Jan 09
As anyone who has worked in optimization recently knows, stripping the randomness out of a computing system is a bad idea. Harnessing randomness is what optimization is all about today. (Even algorithms designed for convergence make extensive use of randomness, and it is clear that library research in particular thrives on it.) But it is evident that much of the technologization of libraries is destroying huge swaths of randomness. First, the reduction of access to a relatively small number of search engines, with fairly simple-minded indexing systems -- most typically concordance indexing, as opposed to keywords assigned by humans -- has meant a vast decrease in the randomness of retrieval. Everybody who asks the same questions of the same sources gets the same answers. The centralization and simplification of access tools thus has major and dangerous consequences. The loss comes even through the reduction of temporal randomness. In major indexes without cumulations - the Readers' Guide, for example - substantial randomness was introduced by the fact that researchers in different periods tended to see different references. With complete cumulations, that variation is gone.
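The point about convergence algorithms relying on randomness can be made concrete with simulated annealing, a standard optimization technique. The sketch below is purely illustrative (the function names and parameters are my own, not from the post): the algorithm deliberately accepts some worse moves at random, and that injected randomness is exactly what lets it escape local minima while still converging.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=10_000):
    """Minimize `cost` from start `x0`, using random moves to escape local minima."""
    x = best = x0
    t = t0
    for _ in range(steps):
        candidate = neighbor(x)            # random perturbation of the current state
        delta = cost(candidate) - cost(x)
        # Accept a worse state with probability exp(-delta / t): this is the
        # randomness that keeps the search from getting stuck in a local minimum.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
        if cost(x) < cost(best):
            best = x
        t *= cooling                       # gradually lower the temperature
    return best

# Toy usage: minimize a bumpy 1-D function with many local minima.
f = lambda x: x * x + 10 * math.sin(3 * x)
result = simulated_annealing(f, lambda x: x + random.gauss(0, 0.5), x0=5.0)
print(round(result, 3), round(f(result), 3))
```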
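By contrast, the deterministic retrieval the excerpt worries about is easy to see in a toy concordance index: every word of every document indexed mechanically, with no human-assigned keywords. This is a hypothetical sketch, not the code of any actual search engine, but it shows the behavior being criticized: the same query against the same index returns the same answer, every time.

```python
from collections import defaultdict

def build_concordance_index(docs):
    """Concordance indexing: index every word of every document as it appears."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, query):
    """Return the documents containing every query word, sorted for stable output."""
    words = query.lower().split()
    if not words:
        return []
    hits = set.intersection(*(index.get(w, set()) for w in words))
    return sorted(hits)

docs = {
    "a": "the traditional future of the library",
    "b": "randomness in library research",
    "c": "search engines and indexing",
}
index = build_concordance_index(docs)
# Identical query, identical index, identical answer -- every time.
print(search(index, "library"))   # ['a', 'b']
print(search(index, "library"))   # ['a', 'b']
```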