• by agar on 1/18/2018, 4:03:49 PM

    Considering the over-the-top language ("a game changer for the computing industry") and questionable or imprecise claims like "[it] allows representation of real numbers accurate to the last digit" (um, who reads that without thinking of irrational numbers?), it sounds too much like a sales pitch and not like serious research.

    I could be wrong, but based on the similarities to interval arithmetic everyone has already identified, I'm pretty skeptical. At best, this could be a patent on a more efficient way to build interval arithmetic into a CPU architecture rather than a completely new technique.

    As my British friends would say though, I can't be arsed to actually read the patent.

  • by shmolyneaux on 1/18/2018, 4:27:00 PM

    The floating point error problem has not been solved. This patent describes a floating-point representation that includes fields for storing error information. The standard IEEE floating-point representation has three fields: a sign field, an exponent field, and a mantissa (or significand). This patent proposes shrinking those fields and adding new fields that store error information, updated by the hardware during regular operations. It also proposes a configurable precision requirement: if an operation can no longer meet it, an "insufficient significant bits" signal, "sNaN(isb)", is raised.

    Not only does this method not reduce floating point error, it reduces the precision that you have for any given number of bits.
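    A rough software sketch of the idea as described above (the names, the error model, and the 20-bit threshold are all hypothetical, not the patent's actual bit layout): a value carried alongside an absolute error bound, with a flag raised once too few significant bits remain.

```python
import math

# Hedged sketch: a float paired with an absolute error bound, updated on
# each operation; "insufficient" plays the role of the sNaN(isb) signal.
class BoundedFloat:
    def __init__(self, value, err=0.0, min_sig_bits=20):
        self.value = value
        self.err = abs(err)              # absolute error bound
        self.min_sig_bits = min_sig_bits # hypothetical configurable limit

    @property
    def sig_bits(self):
        # significant bits remaining: log2(|value| / error)
        if self.err == 0:
            return float("inf")
        if self.value == 0:
            return 0.0
        return max(0.0, math.log2(abs(self.value) / self.err))

    @property
    def insufficient(self):
        return self.sig_bits < self.min_sig_bits

    def __add__(self, other):
        # error bounds add, plus the rounding error of the result itself
        v = self.value + other.value
        e = self.err + other.err + abs(v) * 2**-53
        return BoundedFloat(v, e, self.min_sig_bits)

    def __sub__(self, other):
        v = self.value - other.value
        e = self.err + other.err + abs(v) * 2**-53
        return BoundedFloat(v, e, self.min_sig_bits)

# Catastrophic cancellation: the difference has only ~2 significant bits.
a = BoundedFloat(1.0, 1e-10)
b = BoundedFloat(1.0 - 1e-9, 1e-10)
print((a - b).insufficient)  # -> True
```

    The point of the sketch is the tradeoff in the parent comment: the error fields consume bits that would otherwise go to precision.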

    Unfortunately I can't find any of the figures referenced in the patent to help me understand the novelty of this patent.

  • by ajennings on 1/18/2018, 3:55:58 PM

  • by simias on 1/18/2018, 3:59:31 PM

    >The inventor patented a process that addresses floating point errors by computing “two limits (or bounds) that contain the represented real number. These bounds are carried through successive calculations. When the calculated result is no longer sufficiently accurate the result is so marked, as are all further calculations made using that value.”

    That does seem useful but it's a bit akin to saying that you've solved the division-by-zero problem by inventing NaN. Suppose you're writing some critical piece of software and a floating point operation raises the "inaccurate" flag: how do you deal with that? Do you at least have access to the bounds computed by the hardware, so that you may decide to pick a more conservative value if that makes sense?

    Besides, the link to the "1991 Patriot missile failure" kind of contradicts the claim that this would solve the issue, since Wikipedia says:

    >However, the timestamps of the two radar pulses being compared were converted to floating point differently: one correctly, the other introducing an error proportionate to the operation time so far (100 hours) caused by the truncation in a 24-bit fixed-point register.

    If the problem comes from truncation in a fixed-point register, I'm not sure how this invention would've helped.

  • by speps on 1/18/2018, 3:49:31 PM

    And immediately patents it... so no one else can use it.

    EDIT: and for some other methods: https://en.wikipedia.org/wiki/Unum_%28number_format%29, particularly the latest one being the Posit method: http://superfri.org/superfri/article/download/137/232

    EDIT2: of course other people can license it, but the other way to bring a new floating point format to the scene would be through the same process that happened with IEEE 754. There are plenty of people who won't touch anything patented at all, sometimes even when a patent grant clause is attached.

  • by gibrown on 1/18/2018, 4:02:38 PM

    It doesn't actually sound like he "solved" it. More like he put error bounds around it and can detect when the error is more than X.

    > When the calculated result is no longer sufficiently accurate the result is so marked, as are all further calculations made using that value.

    Solving it would be a pretty big deal. This doesn't feel like it is, though I admit I haven't worked on a similar problem in a long time. Kinda feels like patent trolling as I imagine that lots of companies have put bounds on detecting floating point errors when they need it. There are certainly lots of papers on it: https://www.google.com/search?q=floating+point+error+bounds

  • by danbruc on 1/18/2018, 3:55:51 PM

    Without reading the patent it sounds a lot like interval arithmetic [1], which sounds like a really good idea at first but is not without its own problems. For example the inverse 1/x for an interval x like [-1,+1] containing 0 consists of two intervals (-∞,-1] and [+1,+∞).

    [1] https://en.wikipedia.org/wiki/Interval_arithmetic
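    The problem above is easy to see in code. A minimal sketch (function name is mine, not from the patent): the reciprocal of an interval containing 0 cannot be expressed as a single [lo, hi] pair, which is why implementations typically widen it to (-∞, +∞).

```python
# Reciprocal of the interval [lo, hi]; returns a list of intervals,
# because an interval straddling 0 splits into two disjoint pieces.
def interval_reciprocal(lo, hi):
    if lo > 0 or hi < 0:                 # 0 not contained: one interval
        return [(1.0 / hi, 1.0 / lo)]
    if lo == 0:                          # [0, hi] -> [1/hi, +inf)
        return [(1.0 / hi, float("inf"))]
    if hi == 0:                          # [lo, 0] -> (-inf, 1/lo]
        return [(float("-inf"), 1.0 / lo)]
    # 0 strictly inside: (-inf, 1/lo] and [1/hi, +inf)
    return [(float("-inf"), 1.0 / lo), (1.0 / hi, float("inf"))]

print(interval_reciprocal(-1.0, 1.0))  # two disjoint pieces
```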

  • by ktpsns on 1/18/2018, 4:00:44 PM

    Even worse, a patent for a "processor design, which allows representation of real numbers accurate to the last digit" is obviously nonsense. Pi (=3.141...) is a real number where there is no "last digit".

  • by ronnybrendel2 on 1/18/2018, 3:57:15 PM

    Is this https://en.wikipedia.org/wiki/Interval_arithmetic ? I.e. you carry the lower and upper bound all the way?

  • by whyever on 1/18/2018, 4:06:16 PM

    > “In the current art, static error analysis requires significant mathematical analysis and cannot determine actual error in real time,” reads a section of the patent. “This work must be done by highly skilled mathematician programmers. Therefore, error analysis is only used for critical projects because of the greatly increased cost and time required. In contrast, the present invention provides error computation in real time with, at most, a small increase in computation time and a small increase in the maximum number of bits available for the significand.”

    I'm not sure how much it increases computation time, but software for exactly this is freely available, see for instance Arb: https://github.com/fredrik-johansson/arb

  • by sundarurfriend on 1/18/2018, 4:10:51 PM

    > “Apparatus for Calculating and Retaining a Bound on Error During Floating Point Operations and Methods Thereof”

    It seems to be a system where the hardware design itself keeps track of the accuracy losses in floating point calculations, and provides them as part of the value itself.

    The title is (predictably) exaggerated, but it's an interesting idea, and could potentially be a significant improvement in particular use cases.

  • by cwmma on 1/18/2018, 4:00:05 PM

    Patent in case anyone is curious

    https://encrypted.google.com/patents/US9817662

  • by ben11kehoe on 1/18/2018, 4:11:48 PM

    Mathematica has the cool ability to do symbolic tracking of numerical precision, so it can tell you when, for example, your differential equation solver is giving you meaningless results.

  • by algorithmsRcool on 1/18/2018, 3:55:29 PM

    At a glance this reads similar to Interval Arithmetic in that it places bounds on how much error a value carries.

    Is there something more novel to his approach?

    https://en.wikipedia.org/wiki/Interval_arithmetic

  • by payne92 on 1/18/2018, 4:37:55 PM

    Here’s the issued patent: https://www.google.com/patents/US9817662

    Note that it’s a claim on the processing unit implementation (e.g. the FPU), not the method.

    Nonetheless, I’d be very surprised if this stands the test of interval arithmetic prior art.

  • by hedora on 1/18/2018, 5:53:07 PM

    Unless the article is missing some important nuances, this is just "range arithmetic" or "interval arithmetic" from the 1950s. Here's a Wikipedia page explaining how it works:

    https://en.wikipedia.org/wiki/Interval_arithmetic

  • by Dangeranger on 1/18/2018, 5:06:53 PM

    Wouldn't something like what Douglas Crockford built with DEC64 be more useful and practical?[0]

    [0] http://dec64.com/

  • by chmike on 1/18/2018, 3:59:53 PM

    This looks so obvious. How could this be patented? The real question is why no one has already implemented it. It wouldn't surprise me if it already exists.

  • by tomxor on 1/18/2018, 6:17:55 PM

    Terrible title with a terrible description of the invention.

    What he is doing appears to be interval arithmetic: https://en.wikipedia.org/wiki/Interval_arithmetic

    Because we don't have infinite computer memory or processing power, numbers have to be finite, so no one will ever "solve the floating point error problem". However, being able to quantify the error is both extremely useful and extremely complex, because you have to determine how the error propagates through all of the operations applied to the original input values.

    In science this is also done based on the precision of the raw data, roughly by selecting a sensible number of significant figures in the final calculation. In other words, you omit all of the digits deemed to be potentially outside the precision provided by the raw data. E.g. your inputs are a: 123.456 and b: 789.012, but your result from some multistep calculation is 12.714625243422799; obviously the extra precision is artificial and should be reduced to something slightly less than the input precision (because the inputs will have been rounded).
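    That lab-style rule is easy to sketch (the helper name and the choice of 5 significant figures are mine, purely for illustration):

```python
from math import floor, log10

# Round x to the given number of significant figures -- the rough
# "discard unjustified digits" rule, not the patent's method.
def round_sig(x, sig):
    if x == 0:
        return 0.0
    return round(x, sig - 1 - floor(log10(abs(x))))

print(round_sig(12.714625243422799, 5))  # -> 12.715
```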

    For floating point math this goes a step further: calculating the propagation of error from the end of the maximum-length significand provided by IEEE 754 (where anything longer causes rounding and thus error), and quantifying how that window opens wider and wider as rounding errors propagate towards more significant digits with each additional operation. With interval arithmetic this is done by keeping track of the upper and lower bounds of that window (the real number existing somewhere within it).

    This doesn't solve any of the many issues that floating point math has, but it allows whatever is consuming the result to assign significance to the output of a calculation more precisely, i.e. so that you can say 1369.462628234m is actually 1.4e3m (implying ±100m), perhaps translating into the understanding that your trajectory calculation isn't actually as accurate as the output looks; instead the target has a variance of up to 100x100 meters.
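    The widening-window behaviour can be seen with a toy interval sketch (this is generic interval arithmetic, not the patented design; widening by one ulp per step stands in for the directed rounding a real implementation would use):

```python
import math

# Widen an interval by one ulp in each direction, a crude stand-in for
# rounding the lower bound down and the upper bound up.
def widen(lo, hi):
    return math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf)

# Interval addition: add endpoints, then widen for rounding error.
def iv_add(a, b):
    return widen(a[0] + b[0], a[1] + b[1])

x = (0.1, 0.1)                 # 0.1 is already inexact in binary
for _ in range(1000):
    x = iv_add(x, (0.1, 0.1))  # the bound widens on every operation

print(x[1] - x[0])             # nonzero: the "window" has opened up
```

    After a thousand additions the interval has a measurable width even though every input looked exact, which is exactly the information a hardware bound-tracking scheme would surface.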

    I expect the patent details a hardware implementation to make this practical at the instruction level rather than a likely very slow software implementation.

  • by umanwizard on 1/18/2018, 4:59:55 PM

    Obvious crank.

  • by tlb on 1/18/2018, 6:23:24 PM

    I wrote an interval arithmetic package once too. It was slow, because it had to change the FP rounding flags multiple times for some operations.

    In the end, it seemed like any substantial computation ended up having extremely wide bounds, much wider than they deserved. Trying to invert a matrix often resulted in [-Inf .. +Inf] bounds.

  • by pizza on 1/18/2018, 6:21:25 PM

  • by titzer on 1/18/2018, 5:26:33 PM

    He reinvented interval arithmetic, it sounds like.

    Funny. There was a project at Sun Labs in the early 2000s that went way far down this road. Without looking at its specifics, I am still surprised that the patent was accepted.

  • by beyondCritics on 1/18/2018, 6:24:10 PM

    This appears to be complete nonsense.

  • by ggggtez on 1/18/2018, 5:20:15 PM

    You can't patent math.

  • by known on 1/18/2018, 4:04:04 PM