Is your software trustworthy?
While visiting one of our customers a while back, I learned that their new financial modeling application, using some of our code, would soon be in the hands of hundreds of retail account managers. I found myself alternately proud that they had the confidence to use our code in this high-profile application and slightly uneasy that someone other than a financial engineer was about to make decisions worth substantial sums based on the answers. I have the utmost confidence in the abilities of our developers and software engineers, but I have spent enough years around mathematics and technical software to appreciate a few of the things that can go wrong. The experience made me think about trends in financial applications, the esoteric realities of numerical software quality and the implications for how our organization (and yours) develops software.
In an era where organizations are putting increasingly sophisticated financial modeling software in the hands of employees and customers, it is critical to build in the quality that inspires trust from users who will never see a line of code or a mathematical formula. In the rest of this article I'll talk about trends I see in the development of financial applications and why code quality is increasingly important. I'll also outline some of the steps our organization takes to ensure quality. Though we've been creating, porting and supporting numerical software for 35 years, we are still learning. Perhaps you'll see some steps you can adapt for your software development organization, or some questions to ask about any third-party software you consider adopting.
Trends: sophisticated applications/unsophisticated users?
Our organization has been serving financial customers for more than twenty years. We have seen a number of trends emerge, but some of the most profound have appeared in just the past five years. Among them:
Software applications are outliving their hardware and creators. It's not unusual for a large organization to invest $500,000 to $1 million or more in labor and other costs to create a complex application. The hardware it runs on will be scrap in three to four years. Members of the development team will have moved on to other roles and companies in the same timeframe. Companies will make incremental adjustments and port the application to new hardware and operating system versions, but they will be loath to rewrite it. My personal object lesson in this phenomenon came five years ago when I visited the research group where I did my graduate studies in the late 1970s. When I introduced myself to one of the doctoral students, he told me that he had just converted a modeling application I wrote from FORTRAN to C. I was tempted to ask him how many bugs he found while doing the conversion!
Sophisticated applications are increasingly in the hands of "ordinary mortals" (i.e., not financial engineers) making important business decisions. Gaining business advantage from an application often means enabling more people to use it to make decisions. These users are generally knowledgeable about their jobs but less so about mathematics, statistics and computer software. They don't have the means to validate the results.
With longer lives, applications are being ported one or more times to new chip architectures and operating system versions. Even straightforward upgrades can introduce multiple changes in operating systems, compiler options, underlying math libraries and so on. Even without any changes to the application logic, these other changes can introduce errors into the ported application; the sketch following this list of trends shows a small example.
Enterprises are also putting sophisticated financial modeling software into the hands of their customers as the means to deliver services. In some instances, the modeling application itself is the product sold to the customer. These customer organizations, in turn, are putting the software into the hands of an increasingly large and diverse staff to gain the business advantage promised by the software.
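To make the porting risk concrete, here is a minimal sketch in C (the data and program are invented for illustration, not drawn from any real application). Floating-point addition is not associative, so a compiler that reorders or vectorizes a loop under aggressive optimization, or a platform with different intermediate precision, can legitimately produce a different sum even though the source code and application logic are untouched; the reversed loop below simply stands in for such a reordering.

```c
#include <stdio.h>

/* Hypothetical illustration: summing the same data in two orders.
 * A mix of large and small magnitudes makes the rounding visible. */
int main(void)
{
    double data[] = { 1.0e16, 1.0, -1.0e16, 1.0, 1.0e-3, -1.0e-3 };
    int n = (int)(sizeof data / sizeof data[0]);

    /* Strict left-to-right accumulation. */
    double forward = 0.0;
    for (int i = 0; i < n; ++i)
        forward += data[i];

    /* The same data accumulated in reverse order, standing in for a
     * reordered or vectorized loop on another platform. */
    double backward = 0.0;
    for (int i = n - 1; i >= 0; --i)
        backward += data[i];

    printf("forward    = %.17g\n", forward);
    printf("backward   = %.17g\n", backward);
    printf("difference = %.17g\n", forward - backward);
    return 0;
}
```

On an IEEE double-precision system the two accumulations differ by roughly 1.0 here, which is exactly the kind of silent change a port can introduce when nothing in the "logic" has changed.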
In the face of an increasingly large and diverse population of users, what do we do to ensure that our code meets user expectations, namely "fast, accurate and never breaks"?
To gain some perspective on what can be done, I sat down with two of our most experienced developers and software engineers to talk about the steps we use, from the selection of the algorithm itself to the delivery of product code to the customer. While some of these may be unique to a specialized financial software firm or to numerical code, you may find that you can use them in your application development process. You may also wish to use them as a guide when evaluating code you are considering adopting, whether commercial, open-source or internally developed.
Steps on the road to quality
Algorithm selection: In our areas of mathematics and statistics, there are usually several methods to solve the same problem, and each is likely to have both advantages and disadvantages. For example, method "A" may be computationally faster to reach a solution, while "B" is more robust in handling extreme data or poorly formed problems. Also, some methods are amenable to measuring their own errors while others are not; this is especially important when the code needs to be ported to a new platform. Take the time to evaluate and summarize each of the methods. Better still, have a peer of the first reviewer independently look at the report before choosing. Finally, remember that speed and robustness are often in competition with each other, i.e., the faster method may provide less accuracy or may break more easily. Our general bias is to err on the side of robustness, since machines continue to make code faster. The shorthand we use for this choice is the rhetorical question: "How fast do you want the wrong answer?"
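The speed-versus-robustness tradeoff is easy to illustrate. The sketch below is a hypothetical example, not taken from our library: it computes the roots of a quadratic two ways, using the textbook formula, which is marginally cheaper but loses accuracy to cancellation when b^2 is much larger than 4ac, and a rearranged form that avoids subtracting nearly equal numbers.

```c
#include <math.h>
#include <stdio.h>

/* Hypothetical example: real roots of a*x^2 + b*x + c = 0.
 * The "textbook" formula subtracts two nearly equal quantities when
 * b*b >> 4*a*c, losing most significant digits in one root. */
static void roots_textbook(double a, double b, double c,
                           double *x1, double *x2)
{
    double d = sqrt(b * b - 4.0 * a * c);
    *x1 = (-b + d) / (2.0 * a);
    *x2 = (-b - d) / (2.0 * a);
}

/* The robust version computes the well-conditioned root first and
 * recovers the other from the product of the roots (x1*x2 = c/a). */
static void roots_robust(double a, double b, double c,
                         double *x1, double *x2)
{
    double d = sqrt(b * b - 4.0 * a * c);
    double q = -0.5 * (b + copysign(d, b)); /* avoids cancellation */
    *x1 = q / a;
    *x2 = c / q;
}

int main(void)
{
    double a = 1.0, b = 1.0e8, c = 1.0;   /* b*b >> 4*a*c */
    double t1, t2, r1, r2;

    roots_textbook(a, b, c, &t1, &t2);
    roots_robust(a, b, c, &r1, &r2);

    printf("textbook: %.17g  %.17g\n", t1, t2);
    printf("robust:   %.17g  %.17g\n", r1, r2);
    return 0;
}
```

With these inputs the textbook formula loses most of the digits of the small root, while the rearranged form keeps full precision at essentially the same cost: a tiny instance of "method A is faster, method B is more robust."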
Code engineering: At this stage, our process divides into three parallel tasks: core algorithmic coding, interface design and documentation. The core algorithmic code, where the guts of the computation take place, is documented in XML (Extensible Markup Language). While a user doesn't see this, it permits the automatic adaptation of the documentation to different languages and interfaces without a manual translation (and the errors that can come with it). We separate the interface design because the world is a "moving target" of languages and styles. Abstracting the interface of an algorithm into XML permits software tools to perform the translation to a new environment and eliminates most errors that result from a manual process. The core algorithmic code itself is written under standards that emphasize portability. While it might be tempting to use certain trendy language extensions, we code first for portability and adhere to a set of internally developed standards for a variety of things like variable naming and interface design. This point comes home the first time a routine has to undergo a major rewrite as it is being moved to a new environment. At this stage we also subject the code to a host of software tools for validating argument lists, checking for uninitialized variables and finding memory leaks. The result is a careful blend of strict coding standards, design for portability and the use of automatic tools to reduce human error.
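As a rough illustration of what such interface standards can look like in practice, here is a hypothetical routine skeleton in plain, portable C. The names, status codes and conventions are invented here, not taken from our library; the point is that every argument is validated on entry, failures are reported through an explicit status code rather than by crashing, and nothing relies on compiler-specific extensions.

```c
#include <stddef.h>

/* Hypothetical status codes returned by library routines. */
enum {
    DEMO_OK           = 0,
    DEMO_ERR_NULL_ARG = 1,
    DEMO_ERR_BAD_SIZE = 2
};

/* Compute the mean of x[0..n-1] into *mean.
 * Every argument is checked before the data is touched, and problems
 * are reported through the return code, so a caller (or a binding
 * generated for another language) can map errors to its own idiom. */
int demo_mean(const double *x, size_t n, double *mean)
{
    if (x == NULL || mean == NULL)
        return DEMO_ERR_NULL_ARG;
    if (n == 0)
        return DEMO_ERR_BAD_SIZE;

    double sum = 0.0;
    for (size_t i = 0; i < n; ++i)
        sum += x[i];

    *mean = sum / (double)n;
    return DEMO_OK;
}
```

An explicit contract at the boundary like this is also what makes the automatic tools mentioned above (argument-list validators, uninitialized-variable and leak checkers) pay off, because there is a stated interface for them to check against.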
Code engineering quality assurance: This is an independent peer review of the core code and interface, and a proofreading of the documentation, to ensure that the developer has adhered to coding standards, run the required tools and properly documented the code. You could question this seeming fussiness if only a few routines were involved, but we have over 1,500 at the user-callable level alone. Even for a dozen or more complex routines, code, interface and documentation standards and automatic tools will reduce errors and improve the longevity of the code.
Overnight "build": using the base core code, interface, documentation, stringents and example programs, we build finished executables each night during the development process on six or more systems (chip hardware, operating System, compiler) simultaneously using an automatic process, logging all results. this system tends to find both systemic code errors and ones which are unique to a particle compiler. this "Short loop" system means that errors are caught earlier and portability between SS multiple platforms is assured.
Testing: Simply put, the temptation is great to short-circuit this step. Within our code base we have 30-year-old routines and six-month-old routines, and we plan for the latter to be around as long as the former. We accomplish this by investing more time in test programs (called "stringents") than we do in writing the core code. These stringents exercise all error exits and code paths for a broad range of problems, including "edge cases" where the algorithm begins to fail. Stringents are often two to three times longer than the core code they test. Errors they reveal go back to code engineering for further development. We also use related test programs to assure that the interfaces work properly, and example programs to conduct simple tests of the integrated code. These example programs also exercise all error messages to confirm that the messages are meaningful.
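Continuing the hypothetical demo_mean routine from the earlier sketch (this file is meant to be compiled and linked together with it), a stringent-style test deliberately hits every error exit and the edge cases near where the routine's assumptions strain, not just the happy path:

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>
#include <stdio.h>

/* Declarations matching the hypothetical routine sketched earlier. */
enum { DEMO_OK = 0, DEMO_ERR_NULL_ARG = 1, DEMO_ERR_BAD_SIZE = 2 };
int demo_mean(const double *x, size_t n, double *mean);

int main(void)
{
    double m;
    double one[]   = { 42.0 };
    double mixed[] = { 1.0e308, -1.0e308, 3.0 };  /* near-overflow inputs */

    /* Every error exit is exercised explicitly. */
    assert(demo_mean(NULL, 1, &m)   == DEMO_ERR_NULL_ARG);
    assert(demo_mean(one,  1, NULL) == DEMO_ERR_NULL_ARG);
    assert(demo_mean(one,  0, &m)   == DEMO_ERR_BAD_SIZE);

    /* Ordinary and edge-case inputs. */
    assert(demo_mean(one,   1, &m) == DEMO_OK && m == 42.0);
    assert(demo_mean(mixed, 3, &m) == DEMO_OK && fabs(m - 1.0) <= 1e-8);

    printf("all stringent checks passed\n");
    return 0;
}
```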
Implementation: After testing, we build the production version of the code with all of the base materials. Part of this process is determining the compiler "flags" needed to reach an acceptable compromise between performance and accuracy. In addition, we will often test the code on "variants" (slightly different versions of operating systems and compilers) to advise users about other workable configurations. Finally, we check the installer and user advisory notes to make certain that they conform to the test system and results.
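One way to notice when a flag choice has quietly traded accuracy for speed is to include a small probe of the arithmetic in the build or installation checks. The sketch below is hypothetical, not our actual installer: it simply verifies basic IEEE double-precision behavior of the kind that aggressive, fast-math-style optimization settings can disturb, and flags the build for review if anything looks off.

```c
#include <float.h>
#include <math.h>
#include <stdio.h>

/* Hypothetical arithmetic probe run as part of an installation check. */
int main(void)
{
    int ok = 1;

    /* Machine epsilon should match IEEE double precision. */
    double eps = 1.0;
    while (1.0 + eps / 2.0 > 1.0)
        eps /= 2.0;
    if (eps != DBL_EPSILON) {
        printf("unexpected machine epsilon: %g vs %g\n", eps, DBL_EPSILON);
        ok = 0;
    }

    /* Infinity and NaN handling is sometimes broken by fast-math flags. */
    double inf = HUGE_VAL;
    if (!isinf(inf) || !isnan(inf - inf)) {
        printf("non-finite values are not handled as expected\n");
        ok = 0;
    }

    printf(ok ? "arithmetic probe passed\n" : "arithmetic probe FAILED\n");
    return ok ? 0 : 1;
}
```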
Quality assurance: This is an independent check of the installation on the target system. It also includes execution of the example programs and a check of the stringent test results and the installer/user notes. From this, a master CD and a set of downloadable installation files are created. The master and download files are then used to do a final test installation.
By now, you are probably mentally exhausted and feeling slightly numb. You may also be asking yourself, "Why should I devote this much effort to complex code that goes into my financial modeling applications?" The answer depends on many factors, including the expected longevity of the application and the financial and other consequences of getting it wrong. We take this much care because our users, especially those running the same application on multiple platforms, need to have equal confidence of correctness on any of 40-50 different implementations. We're also thinking about the next operating system version, chip architecture and compiler improvement. Having done this for nearly 35 years, we aren't about to become short-sighted now.
What useful advice could you take away from this? First, take the time to think through your algorithmic needs, people resources and time horizons. The best method for your situation might come from an open-source project on the web, someone on your staff, a published source such as Numerical Recipes, or a supported commercial library like that provided by the Numerical Algorithms Group or others. Keep in mind where we started: complex financial modeling applications are costly to develop and usually outlive the hardware, and often the developers themselves. Their life-cycle costs are dominated by the development staff hours spent building, debugging, maintaining and porting to the next platform. What's the right formula for your next project?