Textbooks on software engineering prescribe checking preconditions at the beginning of a function. This is a really good idea: the sooner we detect that the input data or the environment does not match our expectations, the easier it is to trace and debug the application. A nice function with precondition checking refuses to “work” if the preconditions are not satisfied.
The next question is: how exactly should our function refuse to work when it detects an unsatisfied precondition? I see the following possible answers to this question (sorted from the least invasive to the most destructive):

1. Try to repair the situation and silently continue
2. Return an error code
3. Throw an exception
4. Abort the application
The least invasive approach, to repair the situation and silently continue, is a bad idea. An application consisting of many ‘intelligent’ functions that do something despite erroneous input would be extremely difficult to debug and use. Such an application would always return an answer, but we would never know whether this answer is correct at all.
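To illustrate the problem, here is a hedged sketch of such an ‘intelligent’ repairing function; the function and its names are hypothetical:

#include <cstddef>
#include <vector>

// Hypothetical example: instead of rejecting a bad index, the function
// "repairs" it. The caller always gets an answer, but never learns that
// the precondition (index < values.size()) was violated.
int get_sample(const std::vector<int> &values, size_t index)
{
  if ( values.empty() )
    return 0;                     // an answer invented out of thin air
  if ( index >= values.size() )
    index = values.size() - 1;    // the index is silently clamped
  return values[index];
}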
The second approach, to return an error code, requires a lot of manual work. Not only do we have to establish different error codes for different situations, we also have to generate them, and the caller must not forget to check them. As common experience shows, we do forget to check error conditions…
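Here is a hedged sketch of the error code approach; the codes and function names are made up for illustration:

// Hypothetical error codes for different failure situations.
enum err_t { ERR_OK = 0, ERR_NO_INPUT, ERR_BAD_FILETYPE };

err_t process_file(const char *path)
{
  if ( path == nullptr )
    return ERR_NO_INPUT;          // we have to generate the code...
  // ... real work ...
  return ERR_OK;
}

int main()
{
  process_file(nullptr);          // ...and here the caller forgot to check it
  return 0;
}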
Exceptions are much better: once an exception is thrown, it is propagated to the callers until someone catches it. So the programmer’s burden of checking the error codes disappears. Or does it?… In fact, it gets replaced by the burden of specifying exception handlers at the right places and by the burden of remembering that almost any line of the program can be interrupted by an exception. If we want to make our program not only ‘exception generating’ but also ‘exception safe’, then we have to consider many possible execution paths, with and without exceptions. This turns out to be quite a feat in itself. If you want more gory details, consider Exceptional C++.
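A small sketch of the same precondition expressed as an exception, together with one of the execution paths that makes exception safety tricky (the names are illustrative):

#include <cstddef>
#include <stdexcept>

// The precondition violation becomes an exception...
void process(int *buffer, size_t size)
{
  if ( buffer == nullptr || size == 0 )
    throw std::invalid_argument("bad buffer");
  // ... real work ...
}

// ...but now every caller has to be exception safe.
void caller()
{
  int *data = new int[16];
  process(data, 16);    // if this throws, 'data' leaks: one of the many
                        // execution paths we now have to keep in mind
  delete[] data;
}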
The last choice is the easiest one. If the preconditions are not satisfied, simply abort the application. This is a no-brainer: no error codes, no exceptions, just pay the price of killing the application (if the application is a quick & dirty Perl script, then the tradition is to literally tell it to die…)
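In C++ the equivalent would look roughly like this (the function and the message are hypothetical):

#include <cstdio>
#include <cstdlib>

// A hypothetical fatal precondition check: report the problem and kill
// the whole application on the spot.
void require_input_file(const char *path)
{
  if ( path == nullptr )
  {
    fprintf(stderr, "fatal: no input file specified\n");
    abort();            // no error codes, no exceptions: just stop
  }
}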
Alas, this is acceptable only in a limited number of situations. If we encounter a fatal condition and the application cannot meaningfully continue, then ok, there is nothing to lose, dump it. For example: a compiler which cannot find the input file, or a mail client which cannot find the account settings. The best thing they can do is to stop immediately.
But in all other cases, you should not kill the application. For example, you cannot use this approach in IDA plugins. Imagine a plugin which works with MS Windows PE files. It is natural for such a plugin to check the input file type at initialization time. This is the wrong way of doing it:
if ( inf.filetype != f_PE )
  error("Sorry, only MS Windows PE files are supported");
This is bad because as soon as we try to disassemble a file different from PE, our plugin will interfere and abort the whole application, i.e. IDA. This is quite embarrassing, especially for unsuspecting users of the plugin who have never seen its source code.
The right way of refusing to work is:
if ( inf.filetype != f_PE )
  return PLUGIN_SKIP;
If the input file is not what we expect, we return an error code. IDA will stop and unload the current plugin. The rest of the application will survive.
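For context, here is a minimal sketch of where this check typically lives, assuming the classic IDA SDK plugin layout (the exact callback signature differs between SDK versions, so treat this as an illustration rather than a drop-in plugin):

#include <ida.hpp>
#include <idp.hpp>
#include <loader.hpp>

// The check belongs in the plugin's init() callback: returning PLUGIN_SKIP
// tells IDA to unload this plugin and leave the rest of the session alone.
static int idaapi init(void)
{
  if ( inf.filetype != f_PE )
    return PLUGIN_SKIP;           // not a PE file: politely refuse to load
  return PLUGIN_OK;               // PE file: the plugin stays loaded
}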
Do not let your software be capricious without a reason 🙂