I think your view of how binary values are represented in hardware is slightly skewed.
You can store from 0 to (2^(#bits))-1 in a value that is #bits wide.
You are subtracting 2 for some reason every time you do a calculation. I didn't read everything closely, but I noticed you did this repeatedly.
For example, an 8 bit number can be from 0 to 255, not 0 to 254, same with 16 bit, 32 bit etc.
This of course assumes unsigned values, which, in the context you're talking about, they are.
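A quick sketch in Python to double-check the ranges (the bit widths shown are just the common ones mentioned above):

```python
# Max unsigned value for a given bit width is (2**bits) - 1, not (2**bits) - 2.
for bits in (8, 16, 32):
    max_val = (2 ** bits) - 1
    print(f"{bits}-bit unsigned range: 0 to {max_val}")
# 8-bit  -> 0 to 255
# 16-bit -> 0 to 65535
# 32-bit -> 0 to 4294967295
```

The intuition: with `bits` bits you have `2**bits` distinct patterns, and since one of them is zero, the largest representable value is `2**bits - 1`.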