c++ - VBA debugger precision


I had believed that a Single in VBA is equivalent to a C++ float in an Excel workbook module. Anyway, the value I assigned (876.34497) is rounded off to 876.345 in the Immediate Window, in the Watch window, and in the hover tooltip when I set a breakpoint in VBA. However, if I pass the Single to a C++ DLL, C++ reports the original value, 876.34497.

So, is the original value what is stored in memory? Is this a limitation of the debugger? I'm unsure what is going on here. It makes it difficult to test whether what I'm passing is what I'm getting on the C++ side.


I tried:

?CStr(test)
876.345

?CDbl(test)
876.344970703125

?CSng(test)
876.345

VBA isn't straightforward here, but at some level the value must be stored as 876.34497 in memory; otherwise, I don't think CDbl could report the value it does.
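For reference, a minimal sketch of the C++ side of my test (the exported function name and calling convention here are illustrative, not my actual DLL):

    // Illustrative C++ receiver for the Single passed from VBA.
    // The name ShowSingle and the __stdcall convention are assumptions.
    #include <cstdio>

    extern "C" __declspec(dllexport) void __stdcall ShowSingle(float test)
    {
        printf("%.6g\n",  test);          // 876.345          (6 significant digits)
        printf("%.8g\n",  test);          // 876.34497        (8 significant digits)
        printf("%.15g\n", (double)test);  // 876.344970703125 (exact stored value)
    }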

VBA variables of type "Single" are stored as a "32-bit hardware implementation of IEEE 754[-]1985 [sic]." (See: https://msdn.microsoft.com/en-us/library/ee177324.aspx)

What that means in English is that "single" precision numbers are converted to binary and truncated to fit in a 4-byte (32-bit) sequence. The exact process is described in Wikipedia under http://en.wikipedia.org/wiki/single-precision_floating-point_format . The upshot is that single-precision numbers are expressed as

(1) a 23-bit "fraction" between 0 and 1, *times*
(2) an 8-bit exponent that represents a multiplier between 2^(-127) and 2^128, *times*
(3) 1 more bit for positive or negative.
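If it helps to see those three fields directly, here is a minimal C++ sketch (the variable names are mine) that pulls them out of a float:

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main()
    {
        float test = 876.34497f;

        uint32_t bits;
        std::memcpy(&bits, &test, sizeof bits);   // reinterpret the 32 bits safely

        uint32_t sign     = bits >> 31;           // 1 bit:  0 = positive
        uint32_t exponent = (bits >> 23) & 0xFF;  // 8 bits, biased by 127
        uint32_t fraction = bits & 0x7FFFFF;      // 23 bits

        printf("sign=%u exponent=%u fraction=%u\n", sign, exponent, fraction);
        // prints: sign=0 exponent=136 fraction=5969428
    }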

The process of converting numbers to binary and back causes two types of rounding error:

(1) Significant digits -- as you have noticed, there is a limit on significant digits. A 23-bit integer can hold values up to 8,388,607. Stated another way, no number can be expressed with greater than +/- 0.000012% precision. Reaching back to high school science, you may recall that this is another way of saying you cannot count on more than 6 significant digits (well, decimal digits, at least... of course you have 23 significant binary digits). Any representation of a number with more than 6 significant digits will get rounded off. However, it won't get rounded off to the nearest decimal digit... it will get rounded off to the nearest binary digit. This causes unexpected results (like yours).
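You can see that binary rounding grid directly. A sketch using std::nextafterf (C++11) prints the representable Singles on either side of your value; everything in between collapses onto one of them:

    #include <cmath>
    #include <cstdio>

    int main()
    {
        float test  = 876.34497f;
        float below = std::nextafterf(test, 0.0f);     // next representable Single toward 0
        float above = std::nextafterf(test, 1000.0f);  // next representable Single upward

        // Neighbors are 2^-14 (about 0.000061) apart at this magnitude.
        printf("%.9g < %.9g < %.9g\n", below, test, above);
        // prints: 876.34491 < 876.344971 < 876.345032
    }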

(2) Binary conversion -- the other type of error is more pernicious. There are numbers with fewer than 6 (decimal) digits that still get rounded off. For example, 1/5 in decimal is 0.2000000; it never gets "rounded off." The same number in binary is 0.00110011001100110011..., repeating forever. (That sequence is equivalent to (1/8 + 1/16) + (1/16)*(1/8 + 1/16) + (1/256)*(1/8 + 1/16) + ...) If you used any finite number of binary digits to represent 0.20 and converted back to decimal, you would never get exactly 0.20. For example, if you used 8 bits, you would have 0.00110011 in binary, which is:

  0.12500000
  0.06250000
  0.00781250
+ 0.00390625
------------
  0.19921875

No matter how many binary digits you use, you will never get exactly 0.20, because 0.20 cannot be expressed as a finite sum of powers of two.
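A two-line C++ sketch shows both the stored approximation of 0.2 and the 8-bit truncation worked out above:

    #include <cstdio>

    int main()
    {
        float fifth = 0.2f;
        printf("%.9g\n", fifth);         // 0.200000003 (nearest 32-bit value to 0.2)
        printf("%.8f\n", 51.0 / 256.0);  // 0.19921875  (0.00110011 in binary, as above)
    }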

That, in a nutshell, explains what's going on. When you assign 876.34497 to "test," it gets converted internally to:

sign   exponent    fraction
 +     10001000    10110110001011000010100
       (= 136)     (= 5,969,428)

which is (+1) * 2^(136-127) * (1 + 5,969,428/2^23) = 876.344970703125
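You can check that arithmetic with std::ldexp, which multiplies by a power of two:

    #include <cmath>
    #include <cstdio>

    int main()
    {
        // (+1) * 2^(136-127) * (1 + 5,969,428 / 2^23)
        double value = std::ldexp(1.0 + 5969428.0 / 8388608.0, 136 - 127);
        printf("%.15g\n", value);   // prints: 876.344970703125
    }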

Excel is automatically truncating the display of your single-precision number to show 6 significant digits, because it knows the seventh digit might be wrong. You can't tell what the number really is because Excel doesn't display enough significant digits! That is exactly your point.

When you coerce the value to double precision, it uses the entire binary string and adds 4 bytes' worth of zeroes to the end. It can now display twice as many significant figures, because it is double precision, but as you can see, the conversion from 8 decimal digits to 23 binary digits, followed by the appended string of zeros, has introduced errors. Well, not really errors, if you understand what it's doing; they are artifacts. After all, it's doing exactly what you told it to do... you just didn't know what you were telling it to do!
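Strictly speaking, the widening re-biases the exponent and pads the 52-bit double fraction with 29 zero bits rather than literally appending bytes, but the effect is the same: no new information. A sketch:

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main()
    {
        double wide = (double)876.34497f;   // widen the Single, as CDbl does

        uint64_t bits;
        std::memcpy(&bits, &wide, sizeof bits);

        uint64_t fraction = bits & ((1ULL << 52) - 1);           // low 52 bits
        printf("%llu\n", (unsigned long long)(fraction >> 29));  // 5969428: the original 23 bits
        printf("%llu\n", (unsigned long long)(fraction & ((1ULL << 29) - 1)));  // 0: the padding
    }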

