Wheeler: Not in the way Microsoft wishes.
TCO is extremely sensitive to the
specific circumstances, so a TCO figure for one situation doesn't
usually apply to other cases. I'm sure there are cases where Microsoft's approach
has a lower TCO than the alternatives, so in those specific cases it's true.
However, there are also cases where open source software or
Linux-based solutions have a lower TCO.
You really have to consider all the costs for your specific situation,
and your results may differ.
That point is made by some of the papers
that Microsoft is referencing, but it's not restated in Microsoft's
"Get the Facts" Web pages.
Also, note that almost all of their "independent" studies were actually
funded by Microsoft. You should consider suspect any
study of a vendor that's funded by that vendor -- especially if that
was the only funding.
NewsForge: Why are self-funded studies so suspect?
Wheeler: The short answer is, "because organizations self-fund public studies
to give them good press, not necessarily to give customers a full understanding."
And I say exactly the same thing about IBM, Apple, Red Hat, or anyone else
who funds a public study reviewing its own products.
Companies have a shareholder obligation to maximize profit,
not to provide truly independent assessments to potential customers.
I doubt that these studies just made up their figures, but
the problem with self-funded studies is that
it's so easy to skew them in more subtle ways:
A funder can control the study's setup.
For example, a funder can make itself look good by asking an
evaluator to look at only a few specific factors (ignoring others),
or only at specific environments and situations.
In the old
1999 Mindcraft studies,
for example, Microsoft chose to only evaluate an extremely unrealistic
environment favorable to itself.
A funder can control exactly how the study measures its results.
That can make a significant difference, since different measurement
approaches can produce wildly different results.
If the study uses sampling, it's easy to choose a sample
that skews the results.
A funder can also control the study outputs.
For example, maybe many factors were measured, or many separate studies were
made, and only the favorable ones were reported.
Conflicting results could have been suppressed.
Or perhaps some of the key controlling variables weren't explained or controlled.
The results can even be correctly described in a misleading way.
But let's also give credit where credit is due -- at least most of the
studies acknowledge that they were funded by the vendor.
Back in 1999, when Microsoft funded the original Mindcraft study, early
reports didn't acknowledge Microsoft's funding at all.
Yet that study was funded by Microsoft, the Microsoft systems were
specially optimized by Microsoft engineers for the test, and the
tests (including those of Microsoft's competitor) were even performed at Microsoft.
There was an understandable outcry!
In contrast, if you look at the current crop of studies carefully,
all but one of the "independent" studies referenced by Microsoft
acknowledge that Microsoft funded the study
(I didn't find any such statement from Embedded Market Forecasters;
perhaps it was truly independent).
IDC, to its credit, places the statement "Sponsored by Microsoft Corporation"
in bold letters right under the author names, so it's hard to miss -- but its
report isn't even in the independent list (though it does appear in another list).
So I commend those
study authors for acknowledging this potential conflict of interest.
In a few places, Microsoft's "Get the Facts" page even acknowledges when
a study was funded by the company, but it really should specifically identify
every self-funded study (not just some of them).
There may be useful information in the self-funded studies, but I
don't have any way to be confident in them.
There may have been no manipulation at all, but the
money flow creates a strong incentive for it, and there's no way to know otherwise.
The Object Watch study
does claim that there was no editorial control, and suggests that the
funding wasn't total -- that's very
encouraging, but it's also very hard for someone like me to verify.
Most of the other studies don't even say that.
The problem is that self-funded studies have a built-in conflict-of-interest
that an independent observer can't really examine.
Even indirect funding can be a problem ("give me a good report, and I'll give
you some/more money later for something else").
What's really needed is more
independent studies that are clearly independent, and
not funded directly or indirectly by a vendor.
NewsForge: You often come across as an ardent Linux partisan. Aren't your studies suspect because of that perceived bias?
Wheeler: Actually, I'm not a Linux advocate.
I'm an advocate for considering the use of
open source software / free software (OSS/FS).
As I clearly state in my "Why Open Source Software / Free Software (OSS/FS)?
Look at the Numbers!" paper,
I think it's a serious problem that
"many people fail to even consider OSS/FS products."
In fact, my paper's goal is to
"show that you should consider using OSS/FS when you're looking for software"
(and many more consider OSS/FS now than when I first wrote the paper).
But as I also note in the paper,
"I use both proprietary and OSS/FS products myself."
I work hard to be unbiased.
In particular, I wasn't paid by either side (proprietary or OSS/FS)
for writing my papers contrasting them.
You can (and should) "follow the money," but in my case, you'll
find I have no incentive to be generous to either side.
Do I perceive some advantages for OSS/FS?
Sure, there's no point in considering an option if it has no advantages.
OSS/FS tends to be more flexible (since you can modify the code),
and the openness of the code has fundamental advantages for security.
Mature OSS/FS tends to have a lower initial purchase cost, though total cost
calculations are more complicated.
Most importantly, OSS/FS frees users from the control of
any particular vendor; a user can later self-support or
switch to a different supplier of that same software,
options unavailable to proprietary users.
I believe in the value of competition, and anything
that introduces competition into a market (as OSS/FS is doing)
usually has a very positive impact.
But a particular proprietary program can have key advantages over a particular
OSS/FS program, and that's the sort
of comparison you have to make on a case-by-case basis.
NewsForge: Who can we trust to do independent studies? Is anyone truly independent and unbiased?
Wheeler: In the end, the only way to be really sure that you have
unbiased results is to do the comparison yourself -- which you have to do anyway, because some measures like
total cost of ownership (TCO) and performance are incredibly sensitive
to specific environments.
Before you do your own measures, you can certainly try to gain
insight from other reports.
I highly recommend trying to identify how a given report was funded,
and giving more weight to reports that were clearly
not paid for by any side.
But even potentially biased reports can give you some useful data,
as long as you're careful with them.
A report paid to review a vendor's own product will often
raise issues that vendor thinks are to the vendor's advantage -- but
those issues might be very important to you, and thus worth thinking about
(and examining the competitor for that attribute).
Also, these vendor-sponsored papers often identify who that vendor
thinks is valid competition -- so make sure you include that
other vendor in your evaluation!
For example, Microsoft has information comparing OpenOffice.org to
Microsoft Office (previously noted on Slashdot).
So as an acquirer, that's a tip-off that if I'm
thinking of buying or upgrading Microsoft Office, I'd better also
evaluate OpenOffice.org.
NewsForge: You say, "What's really needed is more independent studies that are
clearly independent, and not funded directly or indirectly by a
vendor." Who will do these studies? Who will pay for them? And do you know of any
already out there we should look at?
Wheeler: If that were easy to answer, there wouldn't be a need for more
independent studies. But I think part of the answer is in
groups and organizations that are funded by potential customers,
not vendors. Consumer Reports is a good example of this
(though they don't focus on software reviews).
Magazines can sometimes play this role, though magazine funding
is often dominated by vendor advertising, making it difficult to stay objective.
And I can imagine organizations banding together (each offering a
certain amount of money for a particular review) until they can
actually fund a particular review.
Often some of the most interesting and objective studies are
from people who are really interested in investigating something else,
and through their investigations find interesting new information.
The Fuzz studies
were like that: these were academics who devised a new method
for measuring reliability, and decided to use it to
measure both proprietary and open source software.
There was no monetary reason to report one way or another; they simply
needed results to demonstrate the method.
Reasoning, Inc., has used its tools to examine the source code of both
proprietary and open source software; its goal is to market
the value of its tools and services, and it doesn't care which software
comes out ahead.
Another great source is previous customers
who have already done the analysis themselves.
After all, if you're looking at the alternatives, others have probably done
so before you.
Please, please, please -- if you've done an in-depth analysis of products
on a particular subject, post it on the Web, or at least offer it for sale!
You'll get free advertising for your organization, and you'll get
useful corrections and clarifications for free.
As far as which studies are worth looking at, my paper
tries to identify any cases where I suspected a potential conflict of interest
among the ones claiming an OSS/FS advantage.
But in the end, as I said before, the best independent study is
the one you do yourself.