Page 193 - DLIS402_INFORMATION_ANALYSIS_AND_REPACKAGING
only automation. That means gaps when the user lacks the terminology, or gaps in a topic that the
search engine seems to skip over.
It’s About Learning Pathways
A good user assistance system should leave users more able to cope with the next question they
have, by adding a bit of explanation, or pattern recognition, or map-like structures that show how
information is accessible. That learning-for-next-time is a piece we need to address. Let's say a
user found what was needed this time with a full-text search engine. Great. On the next question,
though, the same full-text search returned 90 hits and the user gave up.
We need to make that number smaller on our end (with good weighting, metadata, and vocabulary
control). But we also want to help users make the results more focused on their end. How can we do
that? How do we help them recognize the patterns in search that work in this particular body of
information? We could do it by exposing some pieces of the metadata in a non-threatening alternative
access mode.
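One way to make "exposing some pieces of the metadata" concrete is faceted narrowing: count the values of a metadata field across a hit list and let the user filter by one. The following is a minimal sketch; the hit records and field names (`task`, `product`) are hypothetical, not from any particular help system.

```python
from collections import Counter

# Hypothetical hit list: each search result carries simple metadata fields.
hits = [
    {"title": "Install the client", "task": "install", "product": "Client"},
    {"title": "Install the server", "task": "install", "product": "Server"},
    {"title": "Upgrade the server", "task": "upgrade", "product": "Server"},
]

def facet_counts(results, field):
    """Count how many hits carry each value of a metadata field,
    so the user can see which narrowing choices exist."""
    return Counter(hit[field] for hit in results if field in hit)

def narrow(results, field, value):
    """Keep only the hits whose metadata field matches the chosen value."""
    return [hit for hit in results if hit.get(field) == value]

print(facet_counts(hits, "task"))   # which tasks appear, and how often
narrowed = narrow(hits, "product", "Server")
print([hit["title"] for hit in narrowed])
```

Showing the facet counts alongside the results is the "non-threatening" part: the user never has to know the vocabulary in advance, only to recognize a label that fits.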
We have figured out some great ways of doing it for specific tasks: walking a user through a decision
path, exposing contextual help, exposing tutorials. And we have figured out some standards that
the user learns to expect: exposing indexes, TOCs, cross references, or related topics. We need to
figure out how we can expose structure-to-learn pathways depending on the question’s context and
the topic’s context.
If we want the system to scale and to meet challenges like changed and updated information, that
will require aboutness metadata on the topic side, and predictive ability on the search side, and
that’s where the indexing skills come in. Building up a body of controlled aboutness information is
a task that takes off from indexing, and reforms and reshapes it into something that can serve multiple
purposes. For example, if all the topics in a help system have metadata attached, dealing with product
name, task, version, and aboutness, results of a search could automatically lead to matched topics
with the same metadata attributes, regardless of whether the topic lives locally or on the web, and
regardless of whether it has been changed recently. But it takes a very controlled set of aboutness
metadata, in place, and followed rigorously.
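The matching described above can be sketched in a few lines: topics carry controlled metadata (product, version, aboutness terms), and related topics are found by attribute match rather than by where the topic lives. The record layout and field names here are illustrative assumptions, not a real help-system schema.

```python
# Hypothetical topic records: each carries controlled "aboutness" metadata.
topics = [
    {"id": "t1", "location": "local", "product": "Widget", "version": "2.0",
     "aboutness": {"printing", "configuration"}},
    {"id": "t2", "location": "web", "product": "Widget", "version": "2.0",
     "aboutness": {"printing", "troubleshooting"}},
    {"id": "t3", "location": "local", "product": "Widget", "version": "1.0",
     "aboutness": {"installation"}},
]

def related_topics(topic, pool, fields=("product", "version")):
    """Find topics that share the given metadata fields and at least one
    aboutness term -- regardless of whether they live locally or on the web."""
    return [
        other for other in pool
        if other["id"] != topic["id"]
        and all(other[f] == topic[f] for f in fields)
        and other["aboutness"] & topic["aboutness"]
    ]

matches = related_topics(topics[0], topics)
print([m["id"] for m in matches])  # → ['t2']
```

Note that the match works only because the aboutness terms are drawn from one controlled set; if one topic said "printing" and another said "print jobs", the intersection would be empty, which is exactly why the text insists the metadata be "followed rigorously".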
Broadening the Index World
The first steps toward this type of controlled language set involve analyzing types of content and types of
questions, and creating controlled vocabularies, so that your data-to-be is standardized across all of
your documents. This involves developing the standards, checking data across all documents, and
reworking places where some content has been analyzed too deeply in the metadata, and other content not
deeply enough. That's human work, and indexing skills are a natural fit for it. You can rely on automated
concordances to sample what is in each body of knowledge, but the final analysis still needs to be
human, and matched to the needs of the company and the users.
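An automated concordance of the kind mentioned above can be as simple as a term-frequency count across the document set; the sample documents below are invented for illustration. The output only samples what vocabulary is there; as the text says, deciding which terms become controlled vocabulary is still human work.

```python
import re
from collections import Counter

# Hypothetical document bodies; real input would be your own doc set.
docs = [
    "Install the print driver before configuring the printer.",
    "Printer configuration requires the driver to be installed first.",
]

def concordance(texts):
    """Crude term-frequency concordance: lowercase words counted across
    all documents. It samples the vocabulary; it does not analyze it."""
    words = Counter()
    for text in texts:
        words.update(re.findall(r"[a-z]+", text.lower()))
    return words

freq = concordance(docs)
print(freq.most_common(3))
```

Even this toy run shows the gap a human indexer must close: "configuring", "configuration", "install", and "installed" are counted as separate terms until a controlled vocabulary maps them together.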
At some point you will notice overlapping areas, where help content crosses over into web forums, and
where one structure could be devised in multiple ways. That's when this kind of work becomes highly
political and cultural — whose structure of the universe do we take as the “real” one? As soon as
you get into those questions, it becomes highly charged, because no two people structure content
the same way.
These are important categories to the person who wrote the list. The way we break down content as
content developers and representatives of a company’s product is also cultural, and as writers and
editors of content, we have a slightly different culture than our users do. Our notions of what user
assistance looks like may resemble this animal category list to some of our users. Our categories of
tasks and concepts may not make any sense to them. And our aboutness metadata must reflect their
categorizations as well as our own, or their searches may not get good results from our data.
188 LOVELY PROFESSIONAL UNIVERSITY