This month’s Software Quality Professional claims on page 51 that:
By having defined coding standards, developers trained in the use of those standards are less likely to make certain coding errors.
The one thing coding standards guarantee is consistency and, arguably, readability. But fewer errors? I grant that in theory, coding standards can prevent errors. For example, “Don’t use global variables,” “Every function should have an automated test,” or “In Perl, use foreach over the list instead of a C-style ++ index loop” – rules like that can decrease errors.
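That last rule is an instance of a general principle: iterate over elements, not indices. Here is a minimal sketch in Python rather than Perl (the function names and data are mine, purely illustrative), showing the class of off-by-one mistake that direct iteration makes impossible:

```python
# Index-based loop: the bounds are something you can get wrong.
def shout_indexed(words):
    result = []
    for i in range(len(words)):  # a common slip is range(len(words) - 1)
        result.append(words[i].upper())
    return result

# Direct iteration: there is no index to get wrong.
def shout_direct(words):
    return [w.upper() for w in words]

print(shout_direct(["no", "global", "variables"]))
```

The point is not that the indexed version is wrong as written, but that it has a failure mode the direct version simply lacks – which is the kind of error a coding standard can plausibly prevent.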
Then again, those lessons are often best learned through mentoring and good craftsmanship, not coding standards. Most of the coding standards I have seen obsess over where to place the curly braces, what to name the variables, and how many spaces to indent.
In fact, I have seen so-called Fagan-style reviews that focused entirely on that kind of slavish adherence to the standard; hours spent without finding a single defect that would actually impact a customer.
This claim is couched inside an editorial, not a journal paper, so I give the author a little wiggle room, but here’s my suggestion: if you want to make a statement like this in a professional journal, either provide substantial supporting evidence or be honest about its basis. “In my experience” is a great way to be honest; failing that, give at least one tangible example. Otherwise, we run the risk of coming off as disconnected and enterprisey.
Is that too much to ask?