Software development - a lot more than programming

By Magnus Unemyr, vice-president sales and marketing, Atollic AB, Sweden
Wednesday, 18 May, 2011


As microprocessors become increasingly powerful, the appetite for additional software functionality grows with them. The size of embedded systems software increases every year, bringing new types of problems.

The software industry agrees that the largest problems in software development are no longer how to write the individual code lines; today’s problems relate more to managing ever-increasing code complexity, rising development costs and delays, geographically separated project teams, and poor quality in many software products.

Still, it is surprising to see that almost all integrated development environments for embedded systems software focus on the same tasks as they did 20 years ago: mostly the traditional chain of editing, compiling and debugging code.

With better development tools, covering a wider field of problems facing software developers in their everyday work, development times can be reduced and software quality can be improved.

It is also important that developers stop thinking of themselves as just ‘programmers’ and understand that a professional embedded developer today needs to be a ‘software engineer’ as well; ie, he or she needs to understand and practise the process of developing software well, not only write the actual code lines.

Admittedly, tools have been improved over the past 20 years, but it is still primarily the traditional steps of editing, compiling and debugging that are handled. Editors are, of course, better today; developers expect features like expand/collapse of code blocks, spell checking of text in C/C++ comments, functions for visualisation and navigation of the code structure, etc.

Today, embedded compilers handle C++ as well as C, and generated code size has improved, even if that matters less on the new, powerful 32-bit devices. The difference in code size between compilers from different vendors is marginal in most cases, and the free GNU compiler is in many cases even better than commercial compilers.

Debuggers are better too, but the focus is still on executing code and inspecting variable values.

By looking at tool support from a wider perspective, development teams can start to address the problems that really cost money, delay projects and cause companies to deliver software products of low quality. These problems carry a real cost, whether in money, calendar time or damaged company reputation.

Modern C/C++ tools ought to extend the traditional features (editing, compiling and debugging) with new features covering support for design and documentation, team collaboration, traceability in code changes and better tools for improving software quality.

One way of improving the work methodology is to better describe what should be developed before coding starts. This can be done with UML (the unified modelling language), a standardised graphical language for visualisation of software using a number of different types of diagrams.

Examples of diagram types are class, activity, sequence, and state-chart. Atollic TrueSTUDIO is an example of a C/C++ tool that integrates graphical UML editors right into the C/C++ environment to give developers better possibilities for requirements capture, and to model the static structure as well as dynamic behaviour of the application.

UML tools for embedded systems have traditionally been expensive and often required a change of work methodology at the entire development department.

A much cheaper and gentler way into graphical UML modelling is to choose a C/C++ environment with UML editors built in right from the start.

Most seasoned development engineers have experienced the pattern: requirements change, the code needs to be extended and modified, experienced developers leave the project, and after some time no one knows how the code works or why.

Furthermore, no one remembers when a certain change was made, why it was made or what the code looked like before the change.

A good solution that all companies ought to use is a version control system, essentially a database containing all files in the project, including all older versions of them from the project start. A good version control system will not only store all earlier versions of a file, but also old versions of the complete directory structure in the project.

A version control system gives full traceability, and it is easy to work out which program line was added by whom, when and why. It is also easy to revert to earlier versions and to create parallel code bases that can later be merged again.

Releases can be labelled so it is easy to restore the source code for a specific earlier release. A common function is a graphical ‘file diff’ that visualises which lines have been added or removed when comparing two versions (in time) of the same file.

A version control system is a must if several engineers work simultaneously in the project, especially if they are in different geographical locations. With a version control system, several developers can work in the same source code file simultaneously, and ‘check in’ their changes independently from each other.

Other developers can synchronise their local work copies with the latest changes from the server at select times. All developers thus have access to the latest changes in a file, independently of who made the change.

Version control systems are often classified as team collaboration tools and it is true that they are very useful (in fact, almost a must) in projects with several developers. But a version control system is equally useful in smaller projects with only one single developer, as it gives full traceability and it becomes easy to find earlier versions of the code and to track changes over time.

Atollic TrueSTUDIO, for example, even auto-generates graphical charts that visualise the code activities (such as commits, branches/merges and labels/tags) that have been recorded in the version control system server during the project’s lifetime.

Many popular version control systems are available on the market, both commercial and open source. One of the most popular has been the now ageing CVS, which has largely been replaced by the newer Subversion.

Subversion is open-source and thus free of charge, is successfully used by many companies worldwide, and can be deployed on a team server or on a single developer’s computer.

A modern C/C++ IDE ought to have deep integration with popular version control systems.

Just as it is important to keep track of code changes over time, development teams ought to keep track of all feature requests, bug reports (both new and fixed) and to-do items as well.

These issues are typically stored in a centralised issue management system (often called a bug database) on the server. Several bug database systems are available free, such as Bugzilla and Trac. Other popular bug database systems include Mantis and JIRA.

A modern C/C++ development tool should be able to connect to a bug database server and provide integrated features for listing, searching and editing of bug reports and feature requests right from inside the C/C++ environment.

Some tools go even further and integrate both code editing and debugging with the bug database system.

Atollic TrueSTUDIO, for example, remembers which files were open the last time work was done on a specific bug report, and if the bug report is activated (maybe weeks) later, the editor will automatically open the same C/C++ files.

Other features integrated into the product include the ability to document the state of the debugger using screenshots, which can be added as file attachments to a bug report. The screenshots from the debugger can be cropped and annotated with arrows and text from inside the tool.

The next time someone opens the bug report, the screenshot from the debug session is available as additional information related to the bug report.

Finding errors using a debugger is often necessary, but before a bug can be fixed, it must first be detected. It is far cheaper to find a bug before the test phase starts, not to mention before the product is delivered to customers.

Development teams should thus strive to find and correct bugs as early as possible in the development cycle.

Companies that develop software for safety-critical systems (such as the aerospace industry) have become good at finding software errors even before the product goes into the test phase. This is achieved using stringent requirements definition and code reviews, where developers study each other’s code and thus identify potential problems.

Code reviews are considered by many to be one of the cheapest and best ways to improve software quality, but surprisingly enough there has been virtually no tool support for this until now in the embedded industry.

Modern C/C++ development environments ought to integrate code review functions, in the way that has already been done in Atollic TrueSTUDIO. Using this tool, a project manager or a team member can create a code review session by defining which files in the project take part and which team members become reviewers.

Reviewers can then study the source code files in the editor on their own computers, and add code review comments to individual source code lines with a couple of mouse-clicks in the editor. Review comments are classified by the reviewer by type (such as logic error, portability problem or maintainability) as well as by priority (such as critical, major or minor).

In the second phase of the review, all reviewers sit down and discuss all the different review comments together in a code review meeting and optionally assign them to different team members for fixing.

Using this simple methodology, software quality can be improved. Reviewers can study the code in their own time, and all team members taking part in the review meeting improve their coding skills by learning from their colleagues and their mistakes.

Now that functions for code reviews are getting integrated into professional C/C++ environments, there is no longer any excuse not to use this methodology, at least for the most critical parts of the application.

The manual source code review activity can be extended with static source code analysis. This is the process where a software tool analyses the source code of an application and automatically detects potential bugs or other types of problems in the source code.

Most tools that perform static source code analysis check the coding style against a formal coding standard (the most popular one in the embedded industry is currently MISRA-C:2004). The coding standard typically limits the programmer’s flexibility and allows only source code constructs that promote safety, reliability, maintainability and portability.
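
As a rough illustration (the functions below are invented, and no specific rule numbers are claimed), the fragment shows the kind of constructs such a tool typically flags, together with a rewrite in a safer style:

    /* Questionable style: assignment inside a condition, and a switch
       statement with no default clause for unexpected values. */
    void process_command_bad(int cmd, int value)
    {
        int state;
        if (state = value)   /* assignment where a comparison was probably intended */
        {
            switch (cmd)
            {
            case 0:  /* ... */  break;
            case 1:  /* ... */  break;
            }                /* unexpected commands are silently ignored */
        }
    }

    /* Safer style that a checker would normally accept. */
    void process_command_good(int cmd, int value)
    {
        int state = value;
        if (state != 0)
        {
            switch (cmd)
            {
            case 0:  /* ... */  break;
            case 1:  /* ... */  break;
            default: /* handle unexpected commands explicitly */ break;
            }
        }
    }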

Another important feature of some static source code analysis tools is the capability to provide code metrics, which are essentially statistics about the source code. Code metrics can, for example, include the percentage of lines that contain comments, or information about the complexity level of each C/C++ function in the project. Overly complex functions should be rewritten in a simpler coding style to reduce the risk of bugs and to simplify maintenance.
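
As a small, invented example of what a complexity metric reacts to, the deeply nested function below does the same job as the flatter rewrite that follows it:

    /* Nested version: three levels of if-statements for one simple check. */
    int is_valid_reading_nested(int x)
    {
        int ok = 0;
        if (x >= 0)
        {
            if (x <= 100)
            {
                if ((x % 2) == 0)
                {
                    ok = 1;
                }
            }
        }
        return ok;
    }

    /* Flatter version with the same behaviour: easier to read and review,
       and it scores better on nesting-depth metrics. */
    int is_valid_reading_flat(int x)
    {
        return ((x >= 0) && (x <= 100) && ((x % 2) == 0)) ? 1 : 0;
    }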

Static source code analysis should be integrated in the C/C++ IDE to simplify its daily routine use, as is the case with the TrueINSPECTOR tool that integrates into Atollic TrueSTUDIO. TrueINSPECTOR lists all detected coding standard violations, but also displays graphical charts providing an overview, along with detailed information on each coding rule and examples of bad and good code as a teaching aid.

A common misconception is that a static source code analysis tool is only needed at the end of the project, when any violating code lines can be fixed in one go.

In fact, nothing could be further from the truth. No one will start rewriting code with thousands or perhaps tens of thousands of rule violations once the code is completed.

The correct approach is to use this type of tool at least daily, to ensure a gradual and iterative development where all code additions are checked and fixed as they are added.

Most embedded projects do not use a formal test methodology, although more and more teams look into the use of unit tests. Unit tests are essentially function calls into a C/C++ function, where each call uses a different combination of input parameter values to drive different execution paths through the function.
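
As a minimal sketch (the function under test and its tests are invented for illustration), unit tests for a small saturating-add routine might look like this:

    #include <assert.h>
    #include <limits.h>

    /* Hypothetical function under test: saturating addition of two ints. */
    static int saturate_add(int a, int b)
    {
        if ((b > 0) && (a > INT_MAX - b)) { return INT_MAX; }
        if ((b < 0) && (a < INT_MIN - b)) { return INT_MIN; }
        return a + b;
    }

    /* Unit tests: the same function called with different input combinations,
       each chosen to drive a different execution path. */
    void test_saturate_add(void)
    {
        assert(saturate_add(2, 3) == 5);              /* normal path       */
        assert(saturate_add(INT_MAX, 1) == INT_MAX);  /* positive overflow */
        assert(saturate_add(INT_MIN, -1) == INT_MIN); /* negative overflow */
    }

    int main(void)
    {
        test_saturate_add();
        return 0;
    }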

But writing unit tests takes time, is boring and usually does not cover all the important execution paths anyway. Other problems are that development teams frequently have to focus on completing the code and no time is available to keep the unit tests up to date as the code changes during development. Source code and unit tests thus get out of sync and the unit tests become more or less useless.

Once the unit tests are written, additional problems arise regarding how to build, download and run them in a target board. There are several simple unit test tools for PC developers, but they rarely manage compilation, downloading and execution of the test suites in embedded boards.

A better approach is to use a full embedded test automation system. Such tools have not been common in the embedded industry previously, but new tools like Atollic TrueVERIFIER now bring test automation capabilities to embedded developers. An additional benefit is that it is fully integrated in the C/C++ IDE.

TrueVERIFIER analyses the source code of the application and auto-generates unit tests with important combinations of input parameter values to drive as many important combinations of execution paths in the function as possible.

Once the unit tests have been auto-generated, they are auto-compiled and auto-downloaded to the target board using the TrueSTUDIO compiler and debugger. Execution is then performed in the target board with dynamic execution flow analysis to measure the code coverage.

Once the test suite has completed, test results and code coverage information are uploaded to the IDE. TrueVERIFIER measures code coverage on MC/DC level, the test quality level required for flight control system software.

Once code is developed and tested, it is important to understand the quality of the testing that has been performed. It is of no use to conclude that testing is complete with successful test results if the test procedures exercise only a fraction of the software.

The majority of the software might still be untested, and testing of those parts might well have resulted in failures, had it been run!

Code coverage analysis (dynamic execution flow analysis) is commonly used to study what parts of the code have been tested, and hence measure the test quality. There are many different types of code coverage analysis, from very simple analysis up to very stringent types.

Code coverage analysis is often classified formally. The more advanced types of code coverage analysis (such as MC/DC described below) are often used for measuring test quality of safety critical software.

As an example, RTCA DO-178B (a standard for development of flight safety critical software) requires MC/DC testing of software on ‘Level-A criticality’, the most critical part of airborne software, where a software error can lead to a catastrophic situation with loss of aircraft or human lives.

Many projects outside the aerospace industry would also benefit from better control over what has been tested. This is particularly true for companies with high production volumes or with products that are difficult to upgrade in the field.

The same goes for products where the supplier wants to protect its good reputation and where bad publicity can be costly for the company.

For these reasons, code coverage analysis ought to be used to verify whether the software has been tested well enough before delivery to customers.

Examples of different types of code coverage analysis are:

  • Statement or block coverage: This type of code coverage only measures how many of the code lines or code blocks have been executed during a test session. It does not measure how branches in the execution flow affect which code lines or code blocks become executed.
  • Function coverage: This type of code coverage only measures which or how many of the C/C++ functions have been called during a test session. It does not measure which or how many of the function calls in a code section are actually executed.
  • Function call coverage: This type of code coverage also measures which or how many of the C/C++ function calls in a code section have actually been called during a test session.
  • Branch coverage: This type of code coverage measures if all alternate branch paths have been executed in a code section (such as both the if- and the else- part in an if-statement, or that all cases have been executed in a switch-statement). Branch coverage typically requires a code section to be executed several or many times, as all alternative branch directions must be tested.
  • Modified condition/decision coverage (MC/DC): This is a very advanced type of code coverage. It extends branch coverage with the additional requirement that all subexpressions in a complex decision (such as in an if-statement) must drive the branch decision independently of the other subexpressions; a small worked example follows this list. This typically requires the code to be executed many times, with different combinations of values that exercise many combinations of subexpression values.
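
As a hedged sketch of how these coverage levels differ in practice (the function and its name are invented for illustration), consider a single decision built from three conditions:

    /* Invented example: one decision built from three conditions a, b and c. */
    int should_open_valve(int a, int b, int c)
    {
        if (a && (b || c))
        {
            return 1;
        }
        return 0;
    }

    /*
       Statement and branch coverage are both reached with two calls, for
       example (1,1,0) -> 1 and (0,0,0) -> 0, since every line and both
       branch outcomes are then executed.

       MC/DC additionally requires each condition to change the decision
       on its own, which here needs four calls:
           (1,1,0) -> 1   versus   (0,1,0) -> 0   shows that a alone decides
           (1,1,0) -> 1   versus   (1,0,0) -> 0   shows that b alone decides
           (1,0,1) -> 1   versus   (1,0,0) -> 0   shows that c alone decides
    */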

Code coverage analysis on MC/DC-level is, of course, very complicated without sufficient tool support. There are, however, tools that do this automatically, such as TrueANALYZER.

The fact that the code coverage analysis is run in the target system is important. If it is done in a simulated environment on a PC, interaction with the target system (users pushing buttons, other systems sending communication packets, etc) cannot be reproduced properly. Timing and code execution paths might differ, thus reducing the value of the analysis.

The software industry has been progressing rapidly over the last few years, and developers of Windows software have very powerful tools at their disposal. At the same time, tools for development of embedded software are still more or less on the same level as years ago, and only offer features for editing, compiling and debugging for the most part.

It is time to give embedded system developers proper tools, in the form of highly integrated products that cover a much wider set of problems. Such tools are already available and will give development projects better possibilities to deliver well-designed software, on time and on budget, with improved quality.
