Integer money

The modern concept of money has evolved over a considerable period of time. The idea made sense as soon as humans began to specialize. If I have cows and you have chickens, how many chickens should I expect from you in return for a cow? There are numerous arguments that would result in different answers. Being able to assign a monetary value to a cow and to a chicken is a straightforward way to ease the problem, even if it does not settle all the arguments.

The first money was very simple: just individual tokens, which could be used singly or in any number to perform a transaction. But, in due course, it all got more complex …

Before 1971, the currency in the UK was annoyingly complicated. There were 12 pennies in a shilling and 20 shillings in a pound. There were also fractions of a penny: the halfpenny and the farthing [a quarter of a penny]. In addition, other anachronistic units were used, like the guinea, which was 21 shillings; the half-crown, which was 2 shillings and 6 pence; and the florin, which was 2 shillings. Then there was the bob and the tanner – I could go on … This was all fixed with decimalization, which introduced 100 new pence to the pound.

When the Euro was introduced, it too was defined as a decimal currency, with 100 Eurocents to the Euro. This was in line with many pre-Euro currencies in the region, like the French Franc and the German Mark, but different from other, more cumbersome currencies like the Italian Lira. The Lira was always considered rather odd, as it was rare to deal with less than 500 or so Lire, the value of the base unit being so small. But I wonder whether it was not really a model that should have been followed.

What is it with the decimal point? In mathematics generally, it makes sense as the base point for a real number, where there may be an indefinite number of digits on either side. But in decimal currencies, there can be no more nor fewer than 2 digits to the right. This means that, for example, $12.34 is OK, but, although $12.3456 could be used in calculations, any transaction would need to be rounded to $12.35. My question is: what is the difference between $12.34 and 1234 cents? The answer is a dot.
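That rounding step is easy to make explicit in code. A minimal sketch (the variable names are mine, purely illustrative):

```python
# A calculated amount of $12.3456 must become a whole number of
# cents before it can appear in any transaction.
calculated_dollars = 12.3456              # result of some calculation
transaction_cents = round(calculated_dollars * 100)
print(transaction_cents)                  # 1235, i.e. $12.35
```

Once the amount is in cents, it is just an integer and all subsequent arithmetic on it is exact.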

I can see no reason why we should not always work in such integer quantities of cents, pennies or whatever for all transactions and prices. Fractions could still be used in calculations if required. What is the argument against this?
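One practical argument in favour, familiar to any embedded software engineer, is that binary floating point cannot represent most decimal fractions exactly, whereas integer cents are always exact. A minimal sketch of the difference (the function names are illustrative, not from any library):

```python
def sum_dollars_float(price: float, count: int) -> float:
    """Accumulate a price in binary floating point dollars."""
    total = 0.0
    for _ in range(count):
        total += price
    return total

def sum_cents(price_cents: int, count: int) -> int:
    """Accumulate the same price in integer cents - exact arithmetic."""
    return price_cents * count

# Ten items at $0.10 each: the float total drifts, the cent total does not.
float_total = sum_dollars_float(0.10, 10)
cent_total = sum_cents(10, 10)

print(float_total == 1.0)   # False - 0.10 has no exact binary representation
print(cent_total == 100)    # True - 100 cents is just the integer 100
```

The same trap exists in C with `float` or `double`, which is why financial code on any platform tends to work in integer minor units or decimal types.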

Posted March 21st, 2013


About The Colin Walls Blog

This blog is a discussion of embedded software matters - news, comment, technical issues and ideas, along with other passing thoughts about anything that happens to be on my mind.

Comments

2 comments on this post

Commented on 26 March 2013 at 16:58
By Peter Bushell

“I can see no reason why we should not always work in such integer quantities of cents, pennies or whatever for all transactions and prices. Fractions could still be used in calculations if required. What is the argument against this?”

Those who sell to the public like to use the £x.99 trick. They would be pounds (almost) worse off without it!

Commented on 28 March 2013 at 10:05
By Colin Walls

Peter: I think that prices could still end in 99 and get the same effect. BTW, I wrote about the .99 effect a while back: http://blogs.mentor.com/colinwalls/blog/2009/08/06/the-99-phenomenon/
