Floating point numbers are approximations of real numbers that can represent a larger range of values than integers in the same amount of memory, at the cost of precision. If your question is about small arithmetic errors (e.g. why does 0.1 + 0.2 equal 0.30000000000000004?) or decimal conversion errors, please read the "info" page linked below before posting.
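The arithmetic error mentioned above is easy to reproduce. A minimal Python sketch: doubles store binary fractions, so a decimal like 0.1 has no exact representation, and the usual fix is to compare with a tolerance rather than with `==`.

```python
import math

# 0.1 and 0.2 are both rounded to the nearest binary fraction,
# so their sum is not exactly the binary rounding of 0.3.
a = 0.1 + 0.2
print(a)         # 0.30000000000000004
print(a == 0.3)  # False

# Compare with a relative tolerance instead of exact equality.
print(math.isclose(a, 0.3))  # True
```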

- Stackoverflow.com Wiki

Floating point numbers are everywhere; it's hard to find software that doesn't use them. For something so essential to writing software, you'd think we would take great care when working with them. But generally we don't: a lot of code treats floating point values as if they were real numbers, and a lot of code produces invalid results as a consequence.

Most commodity processors support IEEE 754 floating-point arithmetic in hardware, typically in both single (32-bit) and double (64-bit) precision. Though these numbers are ubiquitous, they are often misunderstood.
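One way to see the difference between the two precisions is to round a double down to the nearest single-precision value. A small sketch, using Python's `struct` module to pack and unpack a 32-bit float (the helper name `to_float32` is mine, not from the article):

```python
import struct

def to_float32(x):
    # Pack x as an IEEE 754 single-precision float, then unpack it,
    # yielding the nearest 32-bit value as a Python double.
    return struct.unpack('f', struct.pack('f', x))[0]

print(0.1)              # double: 0.1 (about 15-17 significant decimal digits)
print(to_float32(0.1))  # single: 0.10000000149011612 (about 7 digits)
```

The same decimal constant lands on different binary values depending on the precision, which is one reason mixed-precision comparisons surprise people.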

You have a list of floating point numbers. No nasty tricks: these aren't NaN or Infinity, just normal "simple" floating point numbers. Now: calculate the mean (average). Can you do it? It turns out this is a hard problem. It's hard to even get close to the right answer. Let's see why.
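To make the difficulty concrete, here is a sketch of the obvious left-to-right approach next to one standard remedy, compensated summation via Python's `math.fsum` (this is an illustration of the failure mode, not necessarily the technique this article goes on to develop). The example input is mine: one huge value followed by many small ones.

```python
import math

def naive_mean(xs):
    # Accumulate left to right. Once the running sum is much larger
    # than the next term, that term is partly or wholly rounded away.
    total = 0.0
    for x in xs:
        total += x
    return total / len(xs)

def fsum_mean(xs):
    # math.fsum tracks rounding error in a list of exact partial sums,
    # returning the correctly rounded total.
    return math.fsum(xs) / len(xs)

# At 1e16 the gap between adjacent doubles is 2.0, so each
# "+ 1.0" below rounds straight back to 1e16 in the naive loop.
xs = [1e16] + [1.0] * 10000
print(naive_mean(xs))  # the ten thousand 1.0s vanish entirely
print(fsum_mean(xs))   # accounts for all of them
```

The naive loop never sees the small values at all, while the compensated sum recovers them exactly. Reordering the input changes the naive result too, which is another symptom of the same problem.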