Wednesday, December 30, 2009
Peer Code Review - Keep your source code out of court
"Don't let your firmware source code end up in court! Adopt a coding standard that will prevent bugs and start following it; don't wait a day. Run lint and other static analysis and code complexity tools yourself, rather than waiting for an expert witness to do it for you. Make peer code reviews a regular part of every working day on your team. And establish a testing environment and regimen that allows for regression testing at the unit and system level. These best practices won't ensure perfect quality, but they will show you tried your best."
Read the complete "The lawyers are coming!" article to learn more.
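The advice above to run lint and other static analysis yourself can be illustrated with a toy check. The sketch below uses Python's `ast` module to flag bare `except:` clauses, a classic lint finding; it is a hypothetical stand-in for a real tool such as lint or a commercial analyzer, and Python here merely stands in for whatever language your firmware team actually uses.

```python
import ast

def find_bare_excepts(source: str):
    """Return line numbers of bare `except:` clauses.

    A toy static check, not a replacement for a real analysis tool:
    an ExceptHandler whose `type` is None catches every exception,
    which usually hides bugs rather than handling them.
    """
    tree = ast.parse(source)
    return [node.lineno
            for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

sample = """\
try:
    risky()
except:
    pass
"""
print(find_bare_excepts(sample))  # the bare except sits on line 3: [3]
```

A real tool runs hundreds of such checks on every check-in; the point of the article is that you want to be the one running them, not opposing counsel's expert witness.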
Thursday, August 6, 2009
Peer Code Review - Improving Productivity
A good workflow can make or break a quality initiative. For example, assume that someone delivers a mandate: “All code must be peer-reviewed.” If the development manager responsible for implementing the peer code review interprets that as meaning, “No developer can check in code before it is reviewed,” he is creating a tremendous roadblock in the developers’ natural workflow. This is a prime example of a quality initiative that would hurt productivity...
Read the complete "The future of quality lies in productivity" article to learn a more productive, less painful approach to peer code reviews.
You can also access this article at the Parasoft Resource Centers.
Friday, July 31, 2009
Using Static Analysis as Part of Code Review
"As static analysis tools have become more sophisticated, their role in the software development process has become a subject of debate. Can a project team use a static analysis tool instead of other, presumably more labor-intensive steps in the normal process of coding, testing, verifying, validating, and ultimately, certifying critical software? The answer is an unequivocal 'yes.'"
Read the complete "Making Static Analysis a Part of Code Review" article for the authors' thoughts on how static analysis tools can ease the difficulty of reviewing unfamiliar code.
Wednesday, July 8, 2009
Code Review: Best Practices
Different people mean different things by code review. What’s your definition of code review?
First off, I think that the only practical peer code review process—for centrally-located teams as well as geographically-distributed ones—is one that's managed automatically. For example, using Parasoft's Code Review module:
- Developers check in code to the source code repository as normal.
- A server-driven code review scanner automatically detects what code needs to be reviewed, generates code review packages that show the difference between the new code and the old code, and automatically notifies the designated reviewer(s) that a review is needed.
- Reviews are performed at each reviewer’s convenience from his familiar IDE (Eclipse, Visual Studio, RAD, Wind River Workbench, ARM RVDS, etc.).
- After examining each change, the reviewer either accepts it or requests additional revisions.
- If additional revisions are requested, the author is notified of the request, and the cycle continues.
In a nutshell, that’s how to make the code reviews practical.
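The check-in/scan/notify/accept-or-revise cycle above can be modeled in a few lines. The sketch below is a minimal, hypothetical model of such a server-driven scanner; the class and field names are illustrative assumptions, not Parasoft's actual API, and a real implementation would hook into the source repository and a messaging system.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewPackage:
    """A diff awaiting review. All names here are illustrative, not a real API."""
    author: str
    diff: str
    reviewers: list
    status: str = "pending"  # pending -> accepted | revisions-requested
    notifications: list = field(default_factory=list)

class ReviewScanner:
    """Minimal model of the server-driven review cycle described above."""

    def __init__(self):
        self.packages = []

    def on_checkin(self, author, diff, reviewers):
        # Steps 1-3: a check-in is detected, a review package is built
        # from the diff, and each designated reviewer is notified.
        pkg = ReviewPackage(author, diff, list(reviewers))
        for reviewer in reviewers:
            pkg.notifications.append(f"review needed: notify {reviewer}")
        self.packages.append(pkg)
        return pkg

    def review(self, pkg, accept):
        # Steps 4-5: the reviewer accepts or requests revisions;
        # a revision request notifies the author and keeps the cycle open.
        if accept:
            pkg.status = "accepted"
        else:
            pkg.status = "revisions-requested"
            pkg.notifications.append(f"revisions requested: notify {pkg.author}")
        return pkg.status
```

For example, `scanner.on_checkin("alice", diff_text, ["bob"])` creates a pending package and queues a notification to bob; `scanner.review(pkg, accept=False)` sends the package back to alice, and the cycle repeats until a reviewer accepts.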
To make them effective, they must be tied to requirements, with reviewers focused on determining if the code is actually doing what it’s supposed to be doing. If you don’t do this, the code review typically degenerates into developers scanning the code for problems that automated code analysis could find. That’s a shame because code analysis tools could do this scanning faster, better, and more accurately—and developers could be doing something much more valuable and interesting.
These two things are actually quite closely related: with a computer handling all of the tasks that are not creative, the brain can focus on thinking about the code in terms of the requirements.
Why is this focus on requirements during code review so important?
It’s the best way to identify improperly-implemented requirements—one of the three main categories of defects (along with missing requirements and poorly-designed interfaces that allow users to wander in unintended directions). With at least one other person inspecting the code and thinking about it in context of the requirements, you gain a very good assessment of whether the code is really doing what it’s supposed to. Automated analysis simply cannot uncover these algorithmic functional issues—this high-level analysis requires a human brain.
And what’s the value of code review workflow automation? What do you have against a good old-fashioned sit-down code review?
Well, with sit-down code reviews, the cost usually outweighs the benefit. First, everyone has to figure out a time and place that's convenient to meet—difficult for centrally-located teams, and nearly impossible for many geographically-distributed ones. Then the developers have to spend lots of time on preparation: trying to remember what code was changed and why, marking the changes, correlating the new version with the old, and so on. Finally, there is all the time required for the review itself.
What’s worse, the benefit is typically lower with sit-down code reviews. They typically uncover fewer problems than automated code reviews do. Why? When everyone is sitting together in a room, they don’t give themselves enough time to really think through the code, identify all the possible problems, and come up with viable solutions. The brain does not work instantly; it needs time to think. That’s why it’s significantly more valuable to have a code review process that allows you to review code at your desktop, at your convenience, with enough time to vet potential problems and determine how to make the code more effective.
How does all this time thinking about the code during code review impact the team’s productivity?
Surprisingly, it actually improves productivity...
To read more, download Parasoft's complete "Code Review Best Practices" paper as a PDF.
You can also access this paper at the Parasoft Resource Centers.
Tuesday, July 7, 2009
Webinar: Thursday, July 23, 2009
10:00 a.m. - 11:00 a.m. PT / 1:00 p.m. - 2:00 p.m. ET
An old developer adage states, "Test often, test early." Would you agree?
Undoubtedly, errors in embedded systems can pop up as late as the final stage of development—when running software on the target system. Error prevention should begin on Day 1 of development and remain ongoing until completion.
Such a practice becomes more critical the greater the size and complexity of the software under development.
Join us as we show how to maximize quality assurance efforts and error prevention in embedded software from the get-go by automating the following techniques with Parasoft C++test:
- Error Detection
  - Runtime Memory Monitoring (while executing on target)
  - Flow-based Static Analysis
  - Unit Testing
- Quality Strategies (as part of development process)
  - Coding Standards Compliance
  - Human Code Inspections
  - Workflow Management
Webinar hosted by: Parasoft
Date: Thursday, July 23, 2009
Time: 10:00 a.m. to 11:00 a.m. PT / 1:00 p.m. to 2:00 p.m. ET
Click here to register for this FREE Webinar.
Mark your calendar for Thursday, July 23 from 10:00 a.m. to 11:00 a.m. PT / 1:00 p.m. to 2:00 p.m. ET. We look forward to your attendance online!
