4 = Slightly unfavourable
5 = Very unfavourable
6 = Mixed: both favourable and unfavourable
When calculating averages, the “6” codes are considered equivalent to “3”. The range (the difference between the highest and the lowest judge's rating) is also recorded, so that each story with a large range can be reviewed.
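For readers who want to see the arithmetic, here is a minimal Python sketch of this rule. The function name and the example codes are invented for illustration, and it assumes the range is taken over the adjusted codes.

# Favourability codes: 1 = very favourable ... 5 = very unfavourable, 6 = mixed.
MIXED = 6
NEUTRAL_EQUIVALENT = 3  # a mixed "6" counts as "3" when averaging

def favourability_stats(codes):
    """Return the average (6 treated as 3) and the range of the judges' codes."""
    adjusted = [NEUTRAL_EQUIVALENT if c == MIXED else c for c in codes]
    average = sum(adjusted) / len(adjusted)
    spread = max(adjusted) - min(adjusted)  # a large spread flags the story for review
    return average, spread

# Example: three judges code one story as 2, 6 (mixed) and 4.
print(favourability_stats([2, 6, 4]))  # (3.0, 2)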
5. How noticeable the story was
This is complex, because many factors need to be taken into account. However, to keep the project
manageable, we consider just three factors. For newspapers, these factors are:
The space given to the story (column-centimetres and headline size)
Its position in the issue and the page (the top left of page 1 is the ideal)
Whether there’s a photo (a large colour one is best).
For radio and TV, the above factors are modified to suit those media, with an emphasis on time
instead of space.
Each of these three factors is given a number of points ranging from 0 (hardly noticeable at all) up to 3
(very noticeable indeed). The three scores are then added together, to produce a maximum of 9. We
then add 1 more point if there’s something that makes the story more noticeable than the score alone would suggest (e.g. a reference to the story elsewhere in the issue, or the topic forming part of a larger story).
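A minimal Python sketch of this scoring scheme might look like this; the parameter names are invented for illustration.

def noticeability(space, position, photo, bonus=False):
    """Each factor is scored from 0 (hardly noticeable) to 3 (very noticeable)."""
    for factor in (space, position, photo):
        if not 0 <= factor <= 3:
            raise ValueError("each factor must be scored 0-3")
    # Three factors (maximum 9) plus an optional extra point (maximum 10).
    return space + position + photo + (1 if bonus else 0)

# Example: large story (3), mid-page position (1), small photo (1),
# plus a pointer to the story elsewhere in the issue (bonus point).
print(noticeability(3, 1, 1, bonus=True))  # 6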
6. Anything unusual about this story
The coders write comments when they notice something unusual about the story, especially when
an extra point is added in the previous item. These comments can be referred to later when trying to
make sense of the results of the content analysis.
All this information is recorded first on a one-page printed form, then entered into a spreadsheet, so
that weekly tables and graphs can be produced, showing trends in coverage and differences between
media outlets, especially the balance between the amount of coverage and its favourability.
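As an illustration of that last step, here is a minimal sketch using Python's pandas library. The column names and the example rows are invented; a real project would read the coded forms from a file.

import pandas as pd

df = pd.DataFrame({
    "date": pd.to_datetime(["2001-03-05", "2001-03-06", "2001-03-12"]),
    "outlet": ["Daily A", "Daily B", "Daily A"],
    "favourability": [2.0, 4.5, 3.0],  # averaged judges' codes
    "noticeability": [7, 3, 9],        # 0-10 prominence score
})

# Weekly means per outlet: how much coverage stands out vs. how favourable it is.
weekly = (df.set_index("date")
            .groupby("outlet")[["favourability", "noticeability"]]
            .resample("W")
            .mean())
print(weekly)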
This example (newspaper coverage of an issue) is actually a much simpler task than the first (TV
violence). If it appears more complex, it’s because I’ve covered it in detail, to show exactly how
quantitative content analysis can be done. It’s simpler because we know exactly what we are looking
for: to relate changes in media coverage to changes in public opinion. For TV violence, on the other
hand, it’s more difficult to decide exactly what to look for, and even what “violence” is. (Angry
words? Slamming a door? Casual mention of a death? And so on: many decisions to be argued
about). If you’re a novice at content analysis, don’t begin with a topic as complex as violence.
Example 3: technology diffusion with multiple coding
Example 4: counting words in comments
This example is about automatic content analysis, based on a survey organized for a forum on the
future of Ipswich, an Australian town. A total of 390 residents were interviewed and asked their views of the town’s future. The open-ended answers were typed into a computer file, and
TACT software (designed for literary content analysis, but useful in this context too) was used to
identify the main themes. This was done by comparing the frequency of keywords in the comments
with those words’ frequency in normal English. To avoid being overwhelmed by common stopwords
such as “the” and “and”, the program ignored these words.
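TACT itself is not shown here, but the underlying idea can be imitated in a few lines of Python. In this sketch the stopword list and the “normal English” rates are invented placeholders; a real analysis would use published corpus frequencies.

from collections import Counter

STOPWORDS = {"the", "and", "a", "of", "to", "in", "is", "it"}
ENGLISH_RATE = {"roads": 40, "jobs": 120, "river": 60, "shopping": 30}  # per million words (placeholders)

def keywords(comments, top=10):
    words = [w for c in comments for w in c.lower().split()
             if w.isalpha() and w not in STOPWORDS]
    counts = Counter(words)
    total = sum(counts.values())
    scores = {}
    for word, n in counts.items():
        observed = n / total * 1_000_000       # rate per million words in the comments
        expected = ENGLISH_RATE.get(word, 10)  # default rate for words not in the table
        scores[word] = observed / expected     # above 1 means over-represented
    return sorted(scores, key=scores.get, reverse=True)[:top]

print(keywords(["more jobs and better roads", "clean up the river"]))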
By looking at these Key Words In Context (KWIC), I found a small number of comments that summarized most respondents’ opinions on these issues. Though this method is much less subtle than normal coding, it’s very quick, which was essential on this occasion.
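A KWIC listing is simple enough to sketch directly; the window size and the example comments below are invented for illustration.

def kwic(comments, keyword, window=3):
    """List each occurrence of a keyword with a few words of context."""
    lines = []
    for comment in comments:
        words = comment.lower().split()
        for i, w in enumerate(words):
            if w == keyword:
                left = " ".join(words[max(0, i - window):i])
                right = " ".join(words[i + 1:i + 1 + window])
                lines.append(f"{left:>25} [{w}] {right}")
    return lines

for line in kwic(["the river needs cleaning up",
                  "build a path along the river for walkers"], "river"):
    print(line)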