Personal Knowledge Management: Who, What, Why, When, Where, How?
Jason Frand and Carol Hixon
December 1999
Our students, who will spend most of their working lives in the 21st century, will need to see the computer and related technologies as an extension of themselves, a tool as important as the pencil or quill pen was for the last several hundred years. Fifteen years ago, few people knew what a personal computer was. Now personal computers are ubiquitous. With the proliferation of personal computers and linked computer networks, there has been an increase in the amount of information produced, as well as new avenues for finding it. Personal Knowledge Management (PKM) is an attempt to use the computer to help the individual manage this information explosion in a meaningful way.
What is personal knowledge management? It is a system designed by individuals for their own personal use. Knowledge management has been described by Davenport and Prusak as a systematic attempt to create, gather, distribute, and use knowledge. (a) Lethbridge characterizes it as the process of acquiring, representing, storing and manipulating the categorizations, characterizations and definitions of both things and their relationships. (b) PKM, as conceived at the Anderson School, is a conceptual framework to organize and integrate information that we, as individuals, feel is important, so that it becomes part of our personal knowledge base. It provides a strategy for transforming what might otherwise be random pieces of information into something that can be systematically applied and that expands our personal knowledge.
For whom is PKM designed? It was initially geared toward UCLA MBA students. It has since been introduced to corporate managers, who have found it useful, and in the past year several dot-coms have emerged offering PKM-type tools. We believe that it is generalizable to anyone in any field.
Why is PKM needed? There has been a proliferation in the amount of information available, both from traditional print publishing sources and from electronic resources, particularly on the World Wide Web. Traditional print publications have increased significantly in the last few years: scholarly journals increased approximately 55% between 1987 and 1996, it is estimated that there are more than 30,000 new journals each year, and the number of pages in existing journals has also increased. The number of scholarly monographs published by the members of the American Association of University Presses increased 67% from 1985 to 1995. Non-scholarly and foreign print publications have increased at similar rates. (1) (d)
The proliferation of Web-based information has been even more dramatic. The electronic information age has been fueled by a number of factors:

1) The evolution of microprocessors. Recent conservative predictions estimated that by the year 2005 microprocessors would quadruple the capacity of today's high-end machines. Since those predictions, however, IBM and Motorola have discovered how to make computer chips from copper, a discovery expected to double processor speeds immediately. The point is that we are still in the middle of the computer revolution, and predictions about speed and memory capacity will be obsolete almost as soon as they are made.

2) The falling cost of disk storage. It has been estimated that the cost of disk storage will continue to decrease 60% yearly. The cost of storing traditional print-based information continues to rise at the same time that the cost of storing digital information continues to fall; the book is no longer the low-cost medium. A terabyte of disk storage (1,000 gigabytes) will store 1,000,000 full-text books, and the cost of such digital storage in 2004 is predicted to be $100 - that is, 100 books per penny (a short calculation below verifies this arithmetic).

3) The growth in PCs and PC Internet access. The number of PCs has doubled since 1994. Even more significant, the percentage of PCs that are network ready (able to hook up to the Internet) is expected to increase from 9% in 1994 to 58% in 1999. The only check on this continued expansion is telecommunication capacity, which has not kept up with the predictions for its growth. (2)

4) Internet information growth. The most striking growth is occurring on the Web. In January 1995, according to the Lycos search engine database, more than 2 million WWW documents were published online. Two years later, Lycos had to keep track of over 34 million URLs, with multiple documents available at each URL. (3) Alta Vista noted that in 1995 the Web contained 50 million pages on 100,000 sites; by 1997 it estimated that the Web contained between 100 and 150 million pages on 650,000 sites. There are estimated to be more than 1,000 new Web sites a day. (4)

At a recent (July 1998) Knowledge Access Management Institute held at UCLA, OCLC (the world's largest provider of bibliographic databases) estimated that the amount of information - not knowledge, and not necessarily unique information - available on the Internet in the year 2001 will be greater than all knowledge in recorded history. But herein lies the dilemma: what is the relevance of mere digital size (terabytes of digital information) to the value of the content? What is the ratio of the total volume of networked information to information useful to scholars - or to anyone?
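The storage arithmetic above is easy to verify. A minimal sketch in Python, assuming roughly one megabyte per full-text book (our assumption; the prediction itself does not state a per-book size):

    # Back-of-the-envelope check of the "100 books per penny" claim.
    # Assumes ~1 MB per full-text book, so 1 TB holds ~1,000,000 books.
    terabyte_cost_usd = 100.0        # predicted cost of one terabyte in 2004
    books_per_terabyte = 1000000     # 1 TB / ~1 MB per book
    cost_per_book = terabyte_cost_usd / books_per_terabyte
    print(cost_per_book)             # 0.0001 dollars per book
    print(0.01 / cost_per_book)      # 100.0 books bought by one penny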
One of the most significant differences between paper-based publishing and Web-based publishing is the evaluation of content. In paper-based publishing, the dominant model is one in which editors or editorial boards evaluate the content of what is published. A person picking up a book on management from HBS (or any other traditional press) could depend upon the scholarship and overall accuracy of the information presented - they at least knew that its content had been scrutinized and evaluated to some degree. They knew that they were acquiring knowledge, not just information. In Web-based publishing, the dominant model is one in which there is no editor or editorial board. There are some electronic publications that attempt to mirror the content evaluation that prevailed in the paper-based information world, and some "reviewed" journals. But they are being overwhelmed by commercial enterprises and by individuals with Internet access who simply publish their opinions as if they were factually based.

As T. Matthew Ciolek notes, "The Web is the global sum of the uncoordinated activities of several hundreds of thousands of people who deal with the system as they please. It is a nebulous, ever-changing multitude of computer sites that house continually changing chunks of multimedia information… If the WWW were compared to a library, the "books" on its shelves would keep changing their relative locations as well as their sizes and names. Individual "pages" in those publications would be shuffled ceaselessly. Finally, much of the data on those pages would be revised, updated, extended, shortened or even deleted without warning almost daily." He further notes that "The present body of the WWW is determined largely by the developers' hunger for recognition and applause from their peers…. Those with access (and copyright) to ample and high-quality factual and/or scholarly materials are in the minority." (3)

The tools for retrieving information have become more sophisticated at the same time that the content has become either less sophisticated or more biased. One of the foremost information professionals of our time, Michael Gorman, Dean of Library Services at California State University, Fresno, has stated: "It is perfectly possible to walk into a major research library containing millions of volumes and locate a desired text or part of a text within minutes. This everyday occurrence may seem humdrum, but it is beyond the wildest dreams of any Mosaicist or Web-ster. The result of this activity, moreover, is access to a high-quality text or graphic that is secure in its provenance and instantly useable." (5)
There are any number of Internet search engines available now. The search engines seem to be proliferating almost as quickly as the sites they set out to "index". Some of the more common search engines are Yahoo, Alta Vista, Webcrawler, Lycos, Magellan, Hotbot, Infoseek, Metacrawler, and Freep. There are also dozens (perhaps hundreds) of Web sites that evaluate the effectiveness of the various search engines. Some of these sites are mounted by libraries, some by companies, some by individuals. A quick sampling of these so-called evaluative sites reveals the following: each engine uses different search syntax, each engine uses some type of relevance ranking, each engine indexes only a small fraction of the total number of Web pages available, each engine retrieves different results for the same search, and no single engine satisfies everyone.
In the June 28, 1997 issue of New Scientist, David Brake points out that all of the search engines together cover fewer than half the pages available on the Web, and the engines are falling farther and farther behind. The companies that run the search engines (and they are all for-profit companies) are not expanding their databases to keep pace with the volume of documents appearing on the Web. Instead, they are providing a sampling of the Web. Alta Vista, for instance, attempts to provide a sample from every Web site but tries to index the majority of the content only on the most frequently visited sites. Infoseek adopts a similar approach, indexing 25-30 million pages of text, with about 90% of all its queries being answered using the most frequently accessed 1 million pages. More than 90% of its indexed pages are never accessed.

All search engines work in a similar fashion, sending out programs called "spiders" to scan and catalog the Web automatically by following the links between documents. The pages are indexed by key words and stored in huge databases. Generally, a site will not be picked up by a spider unless it has been linked to by another site or its owner registers it manually with the search engine. Some sites are not open to the general search engines, requiring registration and/or payment before access to the information is permitted; such sites often provide their own site-specific search engines.

Search engines may come to rely increasingly on indexing "meta data" - the data about the data that is embedded in the documents themselves, brief descriptions of the contents of the pages - rather than trying to index the full text of the documents. The problem with using meta data for indexing is that it relies on Web publishers to describe their Web pages accurately. Web publishers recognize the way their meta data is being used, and many are currently stuffing the meta data fields of their pages with a small number of key words repeated over and over in an attempt to ensure that search engines place the page high in the list of hits displayed to the user. There is also no guarantee that the publisher will represent the content accurately. For instance, a recent search by one of the authors for information on the Holocaust Memorial Museum in Washington, D.C. turned up, among the first ten hits, a virulently anti-Semitic diatribe against the museum. (4)
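The fetch-index-follow loop that these spiders perform can be made concrete with a short sketch. This is our illustration of the general technique, not the code of any actual search engine; the seed URL is a placeholder:

    # Minimal sketch of a search-engine "spider": fetch a page, add its
    # words to an inverted index, and queue the links it contains.
    # Real engines add politeness rules, relevance ranking, and vastly
    # larger storage; this only illustrates the crawl-and-index loop.
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class PageParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []          # href targets found on the page
            self.text = []           # visible text fragments

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

        def handle_data(self, data):
            self.text.append(data)

    def crawl(seed, max_pages=10):
        index = {}                   # word -> set of URLs containing it
        queue, seen = [seed], set()
        while queue and len(seen) < max_pages:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            try:
                html = urlopen(url).read().decode("utf-8", errors="ignore")
            except (OSError, ValueError):
                continue             # unreachable or malformed URL; skip
            parser = PageParser()
            parser.feed(html)
            for word in " ".join(parser.text).lower().split():
                index.setdefault(word, set()).add(url)
            for link in parser.links:
                queue.append(urljoin(url, link))   # follow links outward
        return index

    # index = crawl("http://example.com/")   # placeholder seed URL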
Indexing of the Web has no standards and is ruled by commercial firms. In this respect, Web indexing differs widely from indexing of paper-based publishing. Indexing of paper-based publishing has been accomplished largely by libraries or vendors that serve libraries. One of the tenets of libraries is to provide unbiased and accurate evaluation of the content of the materials. They have also developed a set of common principles and standards for describing the materials they own. The Web is largely self-indexed, as well as self-published. There are no such standards or principles in the current ungoverned world of the Web.
In the 1986 publication Overload and Boredom: Essays on the Quality of Life in the Information Society, Orrin Klapp outlines the attrition of meaning that often accompanies the vast accumulation of information. Information overload relates not just to the growing volume of information with which we must all deal, but also to the degradation of that information because of redundancy and noise. We live in a society where we are continually bombarded by media. Everyone must "listen" to a great deal of noise in order to retrieve the few bits of information that are of value to them. Much information is also redundant and must be discarded or ignored for that reason. Trying to clear a path of meaning through the jungle of information is becoming increasingly difficult for all of us. The volume of information has increased so much that we now struggle to keep track of and retrieve for later use those bits of information that we have already identified as being personally useful.
The following chart outlines the changes that have accompanied the shift from traditional print-based publishing to Web-based information:
                        | Traditional            | Web
Cost of production      | High                   | Low
Cost of updating        | Very high              | Relatively low
Cycle time              | Years                  | Hours
Distribution            | Physical               | Electronic
Number of producers     | Controlled             | Unlimited
Editorial review        | Prior to publication   | Essentially none
Content evaluation      | By professionals       | By users
What this chart shows is a clear shift in the responsibility for evaluating content: away from a library (which selects materials based on qualitative criteria) or a publisher (which employs professionals to evaluate content) and onto the individual, in a far more fundamental way than was ever expected with traditional published information.
When is PKM needed? It must become part of one's routine, used whenever one works with information and knowledge: creating, acquiring, evaluating and assessing, organizing and storing, cataloging and indexing, and retrieving from personal memory (whether from your mind or from computer storage).
Where is PKM needed? In dealing with paper documents, electronic documents, Web bookmarks, or one’s home library. One schema can be made to work for all.
How is PKM implemented? Individuals must initiate a process of developing a mental map of the knowledge with which they work. They do this by creating an organizational structure that facilitates finding and relating personal and professional information, using the storage capacity (hard disk) of the personal computer as a tool for initiating these processes. At the Anderson School of Management at UCLA, we present a strategy for integrating personal aspirations, career objectives, and educational experiences that is called the Anderson Edge. In the recent past, the role of educators was to guide people to mines of data, show them how to uncover their own veins of information, then teach them how to sift through the rocks and debris and identify the true nuggets of knowledge. Now educators find themselves having to provide students with the means to keep from being buried alive in the never-ending avalanche of information, while still managing to grab the nuggets, store them safely, and retrieve them for use when needed.
To understand the process, we need to examine the concepts of knowledge and personal knowledge management in greater detail, and how they relate to the laptop computer requirement that the Anderson School has for all its students. We live in a sea of data. Our challenge is knowledge and its management. This is not a new concept, even though we may feel it more acutely now. In T.S. Eliot's 1934 Choruses from "The Rock", he asks:
Where is the Life we have lost in living?
Where is the wisdom we have lost in knowledge?
Where is the knowledge we have lost in information?
Where is the information we have lost in data?
So, we begin with data, add context to get information, add understanding to get knowledge, and add judgment (values) to get wisdom.
Stephen Jones and Peter Thomas looked at personal information management tools in a 1997 article (e) and noted that 60% of their study sample used "to do" lists, 45% used calendars, 45% used address books, 40% used personal organizers, 40% used desk diaries, 35% used pocket diaries, 15% used appointment books, and fewer than 10% used personal digital assistants - the only computer-assisted information management system included in the study. We are seeking to teach people to use the computer to manage their personal knowledge, as opposed to merely their information.
There is no generally agreed-upon definition of knowledge. Sveiby (f) lists some attributes of knowledge that help in understanding it: it is an unlimited resource - one never runs out of the raw materials; knowledge grows from sharing (and the giver of knowledge frequently becomes even more knowledgeable through the process); and communication and personal chemistry are critical in the knowledge process. Nonaka and Takeuchi (g) note that there are two types of knowledge: tacit (subjective) knowledge and explicit (objective) knowledge. Tacit knowledge is built on experience. It includes insights, intuitions, and hunches; it is not easily visible or expressible, is highly personal, and is hard to formalize and share with others. Explicit knowledge is formal and systematic and can be expressed easily in words and numbers. It is the knowledge of rationality and is easily communicated and shared in the form of hard data, formulae, codified procedures, or universal principles. The following chart shows the knowledge spiral developed by Nonaka and Takeuchi.
This chart represents the knowledge transfer process as a spiral, starting off with passive (tacit) knowledge that is externalized in the process of trying to articulate it to someone else. We connect these explicit ideas to the existing body of knowledge, combining them and internalizing them, making them tacit once again. An example of this process would be an experienced cook who intuitively knows how to create a new dish based on years of experience. She tries to teach a novice, externalizing her knowledge of food chemistry, communicating it in the form of a recipe. The novice takes the recipe, compares it to other recipes available in books and experiments with it, internalizing it. The new cook then passes her tacit knowledge on by sharing the recipe and her technique (externalizing) with another cook. And so the spiral goes on and on.
How does one manage the never-ending process of acquiring, storing, sharing, and growing knowledge? Larry Prusak, Managing Partner of IBM Global Services Consulting, acknowledges that it is not really possible to manage knowledge itself, noting that "What a company can do is manage the environment that optimizes knowledge."
In this paper, we’re looking at knowledge in an academic context. The traditional view of knowledge in the academic environment is that knowledge is a product, it’s something you can go to school and acquire. In the university environment the goal is knowledge: the acquisition of it through study and teaching. In the traditional environment, the classroom was for teaching and learning and the library was for preservation, organization, and circulation.
The evolving view of education is that learning is something that has to occur throughout our lives. In this view, knowledge is much more a process of acquisition, testing, evaluation and integration. De Long (g) notes that effective knowledge management is a result of the "fit" between the university environment and culture, the expectations of a particular class, and the individual’s competencies. In this environment, there are three major pieces to consider: the university culture, the classroom presentation (content), and the individual student’s information handling skills.
In the emerging university environment, there is a blurring of roles and responsibilities. There are, in reality, divergent goals among faculty, staff and students: get a degree, teach classes, conduct research, get a paycheck. The course mentality still dominates: students take courses, not a curriculum; faculty teach their classes. There is a lack of coordination within and between areas and integration is left to the learner.
In content and presentation, there are ideas that are novel, not easily understood, difficult to categorize and relate to other ideas. There are multiple ways of looking at ideas and a difference of opinion among faculty across disciplines and within a discipline. Relating concepts is left to the learner.
The individual’s information handling skills are being stretched in the new computer-intensive environment. The learner has the ability to create new information sources or dramatically redesign existing ones. With information technology, there is the capability of automating processes and, at the same time, providing insights into the processes themselves. The application of these skills is left to the learner.
If students and teachers continue to approach the educational experience using the same old approaches and techniques, will investing in information technologies make any difference? What, if anything, do faculty and students need to do differently in order to get value from our investment in information technologies? One component involves personal knowledge management.
Knowledge management presents some challenges. Some problems appear to be intrinsic to knowledge management, whether it is performed using a word processor, a formal language-based tool, or pencil and paper. These problems include the issues of categorizing or classifying, of naming things and making distinctions between them, and of evaluating and assessing. (i) At the Anderson School of Management, we have developed a workshop for MBA students called the Anderson Edge. Its first step is teaching skills to help manage information, since knowledge management builds on information management.
The Anderson Edge consists of teaching students certain knowledge management principles (borrowing heavily from traditional library science) and training them to apply those principles using their laptop computers. The heuristics we present include: searching/finding, categorizing/classifying, naming things/making distinctions, evaluating/assessing, and integrating or relating. For searching and finding we provide them with what we call "launch pads". Launch pads are a set of resources organized by the Library in consultation with the School. Some of these are general and some are discipline-specific.
- There is a database selection tool for organized information sources that helps students to select appropriate starting points based on the characteristics of the data.
- There is the Internet Launch Pad for Web sources that demonstrates that different search engines have different value and attributes.
- Then there are course strategy pages for specific analyses (e.g., Assessing Global Markets) that lead students through the kinds of questions that help them understand how to find the information they need.
For the task of categorizing and classifying we provide them with certain heuristics adapted from library scientists such as Ranganathan, Dewey, Cutter and others. These principles include the following: there are as many classification schemes as there are queries - pick what works best for you; try to anticipate how you’re likely to use something ("role" approach) before classifying; organize from the general to the more specific, putting items into the most specific category; subdivide when you have a new category (we use the rule of thumb of 7 plus or minus 2 to clump material).
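The last of these heuristics lends itself to a mechanical check. A minimal sketch (ours, not part of the Anderson Edge materials) that walks a folder tree and flags any folder whose immediate contents exceed 7 plus 2 items:

    # Flag folders that break the 7-plus-or-minus-2 rule of thumb,
    # i.e. candidates for subdividing into more specific categories.
    import os

    def oversized_folders(root, limit=9):            # 9 = 7 + 2
        for folder, subdirs, files in os.walk(root):
            count = len(subdirs) + len(files)
            if count > limit:
                yield folder, count

    for folder, count in oversized_folders("."):     # "." = tree to check
        print(f"{folder}: {count} items - consider subdividing")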
For the task of naming things/making distinctions, we provide them with the following heuristics, taken from Ranganathan, Bliss, Dewey, Cutter, Martel and others: use names that are meaningful to you; make names as complete as necessary and as short as possible, to identify content and minimize confusion; use unique terms for distinct concepts; use names, abbreviations, file extensions, etc. in a consistent manner; when there are two different ways of expressing the same concept, choose one term and reference the other (e.g., through hyperlinks).
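These naming heuristics can also be checked mechanically. A sketch assuming one hypothetical personal convention - lowercase words separated by hyphens, with "mgmt" chosen over "management" and "mktg" over "marketing"; the convention and the word pairs are invented examples, not a prescription:

    # Check file names against a hypothetical personal convention:
    # lowercase hyphenated words, one chosen term per concept.
    import re

    NAME_PATTERN = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*\.[a-z0-9]+$")
    CHOSEN = {"mgmt": "management", "mktg": "marketing"}  # example pairs

    def check_name(filename):
        problems = []
        if not NAME_PATTERN.match(filename):
            problems.append("not lowercase-hyphenated")
        for short, long_form in CHOSEN.items():
            if long_form in filename:
                problems.append(f"use '{short}' rather than '{long_form}'")
        return problems

    print(check_name("Mktg Notes.DOC"))        # ['not lowercase-hyphenated']
    print(check_name("management-notes.doc"))  # suggests 'mgmt'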
For the task of evaluating and assessing, we provide them with the following heuristics, here adapted from a UCLA Library Web site developed by UCLA librarian Esther Grassian: be aware that a site may not be complete and accurate; question the purpose of the site and evaluate whether there is evidence of any bias; determine whether there are other sources that confirm or validate the information provided; examine when the site was last revised; question the authority or expertise of the individual or group that created the site; and determine whether there is a way to contact the author or information provider. We also alert them to the fact that there are other Web sites that provide guidance in how to select and evaluate useful Internet resources.
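Grassian's questions amount to a checklist, and one simple way to apply it consistently is to record an answer per question and compute a rough score. The scoring scheme below is our illustration, not a published instrument:

    # A web-source checklist mirroring the evaluation questions above.
    QUESTIONS = [
        "content appears complete and accurate",
        "purpose is clear, with no evident bias",
        "other sources confirm the information",
        "site shows a recent revision date",
        "author or group has identifiable expertise",
        "a way to contact the author is provided",
    ]

    def score_site(answers):       # answers: one True/False per question
        return sum(answers) / len(QUESTIONS)

    # A site passing four of the six checks scores about 0.67:
    print(score_site([True, True, False, True, True, False]))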
The task of integrating and relating is left to the individual student. One student developed a Web site with hyperlinks among specific topics. Other possible options include partitioning the Windows environment, hyperbolic nets, … We have found that it is the underlying file structure that is critical.
There are several conceptual strategies that our students have tried: the chronological approach, the functional approach, and the roles-based approach. The chronological approach is very easy to set up and maintain, and it works extremely well over a short time period. However, it does not have good long-term search value, and it requires you to think in terms of when you acquired the information rather than in terms of what information you need. It stores information but does not help you to integrate it so that it becomes knowledge. The functional approach brings like kinds of material together in one category so that it is easier to search. It works well for a small number of topics. However, the larger the number of concepts, the more difficult it is to create and maintain the categories; also, some concepts may cross functional boundaries and be difficult to identify with only one function. The roles-based approach facilitates searching - you look for information in terms of the context in which you will use it. However, working out the roles can be difficult, and the roles change over time, requiring updating and modification of categories.
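The three schemes are easiest to compare side by side. A sketch showing how the same three documents might be filed under each; all folder, course, and role names are invented examples:

    # The same three documents filed under each organizational scheme.
    chronological = {     # answers "when did I acquire this?"
        "1999": {"fall": ["mktg-notes.doc", "resume.doc", "budget.xls"]},
    }
    functional = {        # answers "what is this about?"
        "marketing": ["mktg-notes.doc"],
        "career":    ["resume.doc"],
        "finance":   ["budget.xls"],
    }
    role_based = {        # answers "in which role will I use this?"
        "student":    {"mktg-403": ["mktg-notes.doc"]},
        "job-seeker": {"applications": ["resume.doc"]},
        "homeowner":  {"household": ["budget.xls"]},
    }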
In the Jones and Thomas study previously mentioned, the authors concluded that fewer than 10% of their 1996 sample used any computer-based technologies within their personal information management systems. Those who did use computer-based technologies did not use them exclusively; they also relied on traditional pen-and-paper methods. However, our observation at the Anderson School is that those who have adopted the Palm Pilot for personal calendaring and address books are abandoning pen-and-paper methods. Knowledge management tools are not far behind. The rate of technology introduction has been faster at some schools, but the technology introduced and its potential uses are not that different. We attempt to provide a framework for our students to use the technology to manage the information they encounter and to transform it into knowledge. Knowledge management is the challenge for the 21st century.
(1) Convocation speech by David Shulenburger at the Conference on Scholarly Communication and the Need for Collective Action, Roundtable on Managing Intellectual Property, November 13-14, 1997
(2) OCLC Knowledge Access Management Institute - points 1-3 of paragraph
"Today's WWW, Tomorrow's MMM: the Specter of Multi-Media Mediocrity", T. Matthew Ciolek, in Educom Review, Volume 32, Number 3, May/June 1997 (http://www.educom.edu/web/pubs/review/reviewArticles/32323/html)
(4) "Lost in Cyberspace", David Brake, in New Scientist, June 28, 1997 (http://www.newscientist.com/keysites/networld/lost.html)
(5) "AACR3? Not!", Michael Gorman, in The Future of the Descriptive Cataloging Rules, edited by Brian E. C. Schottlaender, Chicago: American Library Association, 1998.
a Davenport and Prusak, Working knowledge, HBS Press, 1998.
b Lethbridge, Timothy Christian, Practical techniques for organizing and measuring knowledge, University of Ottawa, Doctoral Thesis, Canada, 1994.
d Ulrich's Directory of Periodicals, New York: Bowker, various years; UNESCO Statistical Yearbook, New York: United Nations, various years; International Book Publishing: an Encyclopedia, New York: Garland, 1995.
e Stephen Jones and Peter Thomas, "Empirical assessment of individuals' 'personal information management systems'", Behaviour & Information Technology, 1997, vol. 16, no. 3, p. 158-160.
f Karl Erik Sveiby, The New organizational wealth: managing & measuring knowledge-based assets, San Francisco : Berrett-Koehler Publishers, Inc., 1997, p. 29.
g Nonaka and Takeuchi, The Knowledge-Creating Company, 1995, p. 71.
Source: http://www.anderson.ucla.edu/faculty/jason.frand/researcher/speeches/PKM.htm