I spoke in an earlier post about my organization's new push to evaluate our work. One way we're doing that is by tracking and recording citations of our publications: books, special reports, and so on.
I've been working on creating a database for these records, and one question I keep asking myself is whether this is truly a valid measurement of the impact of our work. So many factors affect when and where a publication gets cited that it's hard to imagine citation counts are really representative of its impact on the community of practice. It also leads me to wonder if, as with search engine optimization, it's possible to practice citation optimization. Once we get these lists of citations out of the unwieldy spreadsheets they currently live in, I'd love to do some research on which items are getting cited more often than others. Do catchy titles make a difference? Do some authors get cited more than others? Has the number of citations increased as the organization has gained longevity and prominence? What about the length of the work?
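The kind of tallying I have in mind could be sketched like this. This is only a rough sketch, assuming the spreadsheet records were exported to CSV; the column names ("title", "author") and the sample data are hypothetical, not our actual schema.

```python
# Sketch: count citations per publication and per author from a CSV export.
# The columns and sample rows below are hypothetical placeholders.
import csv
import io
from collections import Counter

SAMPLE = """title,author,year_cited
Report A,Smith,2019
Report A,Smith,2020
Report B,Jones,2020
"""

def citation_counts(rows, key="title"):
    """Tally citation records grouped by the given column."""
    return Counter(row[key] for row in rows)

rows = list(csv.DictReader(io.StringIO(SAMPLE)))
print(citation_counts(rows, "title").most_common())   # most-cited publications first
print(citation_counts(rows, "author").most_common())  # most-cited authors first
```

From there, the same grouping could be run on title length, publication year, or any other column, to probe the questions above.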
In theory, if you could answer all of these questions, you could design a publication optimized to be cited by other works, even if its content was of lower quality than that of an un-optimized publication. Given that possibility, can citation frequency really be relied upon as an accurate measure of our work's impact?
I would argue that it can't, at least not on its own, but it's still interesting to see what sorts of uses our publications are put to. And if nothing else, it may prompt further progress by our own staff as they see analysis and criticism of their work.