Constance Korol oversees the Institute of Business Forecasting & Planning group on LinkedIn. (No, she isn’t the meaner one I will be referring to, but she can swing a nasty rolling pin if you get out of line.) This week Constance posted a Wall Street Journal article, “Follow the Tweets,” and solicited feedback from the group. The article described a method of monitoring comments on Twitter to predict where next week’s sales are heading.
While I was initially excited to hear about a new method to improve forecasting, it turned out to be just another tease. As often happens (even for some forecasting consultants and software vendors), the article’s authors had confused forecasting with fitting a model to history. They never actually did any forecasting with the proposed method to test how well it worked, and provided no convincing reasons to believe it would be of any use. Being the kind of unpleasant person I was raised to be, I left a relatively uncomplimentary review of the article for the IBF group:
The authors state:
“Thus, by observing the number of positive tweets after opening weekend, we could have predicted that the movie would continue to generate revenue in the coming weeks.”
To claim that they “could have” predicted something assumes that their model works. But they fail to give us any reason to believe the model works, since they don't appear to have actually done any forecasting with it. (Forecasting is predicting future behavior, not fitting a model to past behavior.)
I would like to see these folks actually forecast something with their model, and then let us know whether the model is useful. So far, it sounds like they have fit a model to the historical behavior of 3 movie releases. Fitting a model to history is not the challenge — the challenge is to create good (or at least usable) forecasts. I would be very happy to see them demonstrate the ability to create good forecasts using Twitter data — we all need ways to improve our forecasts — but so far they have not shown evidence of that capability.
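The distinction I keep harping on is easy to make concrete. Here is a minimal sketch (my own illustration, with entirely made-up weekly sales numbers — nothing to do with the article's actual method or data): fit a simple trend to the early weeks, hold out the last weeks, and judge the model on the holdout, not on how snugly it hugs history.

```python
# Sketch: in-sample fit vs. out-of-sample forecast.
# All numbers are invented for illustration only.

def fit_trend(series):
    """Ordinary least-squares line y = intercept + slope * t."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    cov = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    var = sum((t - t_mean) ** 2 for t in range(n))
    slope = cov / var
    intercept = y_mean - slope * t_mean
    return intercept, slope

def mae(actuals, predictions):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actuals, predictions)) / len(actuals)

history = [120, 115, 130, 125, 140, 128, 118]  # hypothetical weekly sales
train, holdout = history[:5], history[5:]      # fit on 5 weeks, hold out 2

b0, b1 = fit_trend(train)
fit_error = mae(train, [b0 + b1 * t for t in range(len(train))])
forecast_error = mae(holdout,
                     [b0 + b1 * t for t in range(len(train), len(history))])

print(f"in-sample fit MAE:          {fit_error:.1f}")
print(f"out-of-sample forecast MAE: {forecast_error:.1f}")
```

With these made-up numbers the line fits history nicely (MAE of 4.8) and then misses the held-out weeks badly (MAE of 20.5) — which is exactly the point. A tidy fit to the past proves nothing; only the holdout test tells you whether the model can forecast.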
This caused me an (ever so slight) twinge of remorse for being so harsh.
Fast forward to one Deb Di Gregorio of Camarès Communications. Deb commented on the article via a wonderfully fiendish blog post “Biz Flash: Tree Falls in Forest! No One Tweets!” which begins thusly:
A most remarkably huge section of the Wall Street Journal was wasted today on “Follow The Tweets,” an article by three professors who set out to attempt to predict which movie would do best at the box office based on Tweets.
It doesn’t get any nicer after that, as she states later on:
This only proved one thing: there are enough Tweets about movies to give three professors enough data to write something utterly inane and enough ignorant editors at the WSJ to approve the article.
The purpose of The BFD is to encourage new ideas, creative thinking, and alternative approaches to the problems of business forecasting. But this does not entail the blind promotion of each new forecasting method that is developed. On the contrary, The BFD is a forum for exposing practices that are either unproven, or demonstrably wrong. Like the value of Google search statistics (which was discussed here on July 10), the value of Twitter data to aid in forecasting remains to be proven.
And I thought I was mean…