Close Enough is Good Enough

October 2005

1. Introduction

I've worked on a few science-related projects at work and at home over the past decade, and there's one theme that keeps coming up. In most fields, there's some problem that is considered "very hard", meaning it would take a computer a very long time to figure out the exact answer.

A couple of years back, I took a class on genetic algorithms. One of the things I learned was that the main appeal of using a GA to solve a problem is that it can very quickly find a solution that is "very close" to the exact one. In most instances, this "very close" solution is good enough.

2. π

Here's a good, simple example: π. When using it in computations, people routinely abbreviate it to just two decimal places (3.14), or substitute the rough fraction 22/7 (3.142857...). Where else would you shorten a value like "3.14159265358979323846..." that way? We do it because the extra accuracy is usually not needed. Many calculations aren't accurate past the second decimal place anyway, so why bother using the more precise value when that precision will just be ignored? I've found that this is the case with many of the non-published scientific results that need to be computed.
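A quick check (a Python sketch; the labels and the set of approximations compared are my own choices) shows how little accuracy the common shortcuts actually give up. Amusingly, the "crude" fraction 22/7 turns out to be slightly closer to π than 3.14 is:

```python
import math

# Compare the common shortcuts against the real value of pi.
approximations = {"3.14": 3.14, "22/7": 22 / 7, "3.14159": 3.14159}
for label, value in approximations.items():
    error = abs(value - math.pi)
    print(f"{label:8} off by {error:.2e} ({error / math.pi:.4%})")
```

All three are within about 0.05% of the true value, which for most everyday calculations is plenty.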

π is a perfect example, if we look at it from another angle as well...

You can use a GA to iteratively find "close" solutions to problems, as mentioned above. Along these lines, you can use a GA to determine π. There are many ways to set this up, so picking a specific one is left as an exercise for the reader; the following holds for any of them. The first few iterations/generations of the GA will give results like 3.2, 3.0, and such, but after a few more iterations, it will have quite a few decimal places of precision. This happens much more quickly than with other computational methods for those first few decimal places, which for most uses of π is all you need anyway.
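Since no specific construction is singled out above, here is one possible minimal sketch (the fitness function, population size, and mutation schedule are all my own illustrative choices, not a canonical recipe): it treats π as the place where cos(x) = -1 and evolves a population of guesses toward it with selection, crossover, and mutation.

```python
import math
import random

def fitness(x):
    # cos(x) == -1 exactly at x == pi, so a score of 0 is perfect
    return -abs(math.cos(x) + 1.0)

def evolve(pop_size=50, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 4.0) for _ in range(pop_size)]
    sigma = 0.5                                  # mutation size
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 5]           # keep the fittest 20%
        pop = []
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2.0                # crossover: average
            child += rng.gauss(0.0, sigma)       # mutation: jitter
            pop.append(child)
        sigma *= 0.9                             # anneal the mutation
    return max(pop, key=fitness)

print(f"{evolve():.6f}")
```

Early generations really do produce values like 3.0 and 3.2; by the end, the best candidate agrees with π to a few decimal places, which matches the behavior described above.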

If you were to extrapolate from this and think of another computationally expensive function, it might conceivably take hours to determine a high-precision answer to the problem (if an answer is even known or computable, that is). A "close enough" solution might be achieved in minutes, or even seconds.

A combination of the two might also be applicable to your problem. If there are a handful of computations that all hinge on one expensive computation being completed first, it might be worth computing a "close enough" answer first, running it through the rest of the computation, and using that as a starting point. While the rest of the computation completes, you can compute the higher-precision solution, then just apply any differences later. In many instances, this is a good way to get moving toward your goal.
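As a toy sketch of that pipeline (the function names and the stand-in "expensive" computation are hypothetical, and this simple add-the-difference correction assumes the downstream step can be re-run or corrected once the precise value arrives):

```python
import math

def expensive_constant_rough():
    return 3.14                # available immediately

def expensive_constant_precise():
    return math.pi             # pretend this takes hours to compute

def downstream(c, radius=10.0):
    return c * radius ** 2     # a later step that needs the constant

# Kick off the rest of the work right away with the rough value...
rough_result = downstream(expensive_constant_rough())

# ...then, once the precise value arrives, apply the difference.
final_result = rough_result + (
    downstream(expensive_constant_precise()) - rough_result
)
print(final_result)
```

The rough result is usable immediately, and the correction step is cheap compared to waiting for the precise constant before starting anything at all.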

Richard Feynman used to do something similar; he could seemingly give very precise answers to mathematical problems very quickly. Simplified: if he were given "22 / 7", he would first realize it was 3 and some change, so he'd immediately say "Three point...", then he'd work out the remainder in his head, 1/7, which he might know a shortcut for, giving "one... four..." and so on.
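That trick is really just long division done one digit at a time: announce the integer part immediately, then stream the decimals as you work them out. A small sketch (the helper name is mine):

```python
# Digit-at-a-time long division: yield the integer part first, then
# each decimal digit as it is computed from the running remainder.
def digits_of(numerator, denominator, places=6):
    whole, remainder = divmod(numerator, denominator)
    yield str(whole) + "."
    for _ in range(places):
        remainder *= 10
        digit, remainder = divmod(remainder, denominator)
        yield str(digit)

print("".join(digits_of(22, 7)))  # 3.142857
```

Like Feynman's answer, the "3." is available before any of the decimals have been computed.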

3. Not just for mathematics...

This idea applies in other situations as well.

There are times when I'm working on a software project and know I could spend hours, days, even weeks designing the software, attempting to figure out every possible issue and problem that might arise. Only after all of that could I start writing the software, and invariably, some other situation would arise that was not considered at design time.

Instead, if I spend an hour or two coming up with a sketch of where I need to go, and then roughly follow that sketch, I will end up in the same place much sooner. At that point, design corrections can be made, code can be rewritten if necessary, and if really necessary, a second, better design might be implemented instead. This "close enough" solution might also end up being good enough for the final version of the software... at least for a proof of concept, anyway. In the industry, "Extreme Programming" is based on this methodology.

We also do this in completely unrelated situations. If you needed to get to a certain address in Los Angeles, but you were in New York, you might sit down and plan out your entire path from here to there, or you might grab a map, hop in your car, and head west on, say, Interstate 80. You know it will get you going west, which is generally where you need to go. Once you get closer, you can figure out the details of getting exactly where you need to be, but you get a good, quick start without losing time or effort.

Every time a player in a sport throws a ball, the exact trajectory and velocity of the ball are not computed... it would take far too long, and sporting events would be very boring. Instead, the athlete estimates how much velocity (and so on) the ball needs to get it approximately where it needs to go. Most of the time, this is "good enough".

4. Conclusion

The next time you need to find the solution to a problem, first consider whether you really need to know the exact answer, or whether a much less expensive, not-as-precise solution might be adequate instead.