So, can I get your Number? (all about numbers in AS3)

It doesn't really get more basic than Numbers. Everybody can count, and I'm sure your first program involved feeding numbers into the computer in some way (after the obligatory "Hello World" experience, of course). So why a post about numbers? Well, before I insult your intelligence further, it's important to know that in the computer's eyes not all numbers are equal. Most of the time this makes no difference whatsoever, but because numbers are a cornerstone of programming you will inevitably run into that one time when it does. Hopefully I can save you some of the subsequent head bashing with this post, because head bashing is fun in the way that scraping paint with your eyeballs is fun.

A bit of history for the uninitiated

In ActionScript, numbers take the form of three data types. Anybody with even a little experience knows that these are Number, int and uint. When first starting out I remember wondering what the point of different number types was – why not just use a single type, Number? The answer comes down to memory, and the fact that computers aren't very good at counting. The language of computers is of course binary – 0s and 1s – so in a sense you could say counting is the only thing computers do, but 1 is the highest value they can store in a single bit. To get higher numbers you need to start combining bits: a second bit is used to count groups of the first, a third to count groups of the second, and so on.

So in effect the binary "10" translates to:

10 (binary) = "we have one group of two + zero groups of one (2+0)" = 2

This is exactly the same way our own numbering system works, except that our decimal system counts in groups of ten instead of two. So in other words:

101 (binary)  = "we have one group of four + zero groups of two + one group of one (4+0+1)" = 5
101 (decimal) = "we have one group of a hundred + zero groups of ten + one group of one (100+0+1)" = 101

By continually adding binary bits in this manner we eventually arrive at the following:

11111111 = "128+64+32+16+8+4+2+1" = 255.

And thus we arrive at the fact that the 8 bits in a byte give us 256 possible values (computers start counting at zero). Sounds familiar, doesn't it? Understanding this also happens to be the key to understanding the mystery of bitwise operators.
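
If you want to poke at this yourself, here's a small sketch using the radix argument that toString() and parseInt() both accept:

	var byteValue:uint = 255;
	trace(byteValue.toString(2)); // 11111111 - the binary representation
	trace(parseInt("11111111", 2)); // 255 - and back again
	trace(parseInt("101", 2)); // 5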

To count higher, we obviously have to start adding bytes together. In this way a 32bit number can store 256*256*256*256 = 4294967296 values, which happens to be the exact number of values an int or uint can hold. So when we declare an int or uint, it's basically a contract with the computer that the value is never going to require more than four bytes, which allows the computer to assign those bytes in memory optimally. The difference between int and uint is that the uint range starts from 0, while the int range is split equally into negative and positive numbers to provide a more workable spread of values. It's at this point that you might realise the 0xAARRGGBB hex values used for colours map exactly onto the range of uint, and in this way every colour in Flash can be represented by a 32bit uint.

As a curious side note, in the same way that binary uses a counting base (or radix, as it's technically called) of 2 and our decimal system uses a base of 10, hexadecimal uses a base of 16. That means 0xFF = "15 groups of 16 + 15" = 255, and is a shorthand way of representing the value of a byte. This is also how we use bitwise operators to separate a colour into its constituent channels.
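
As a rough sketch of what that looks like in practice (the colour value here is just an arbitrary example):

	var colour:uint = 0xFF3366CC;
	var alpha:uint = colour >>> 24;        // 255 - the top byte
	var red:uint = (colour >> 16) & 0xFF;  // 51
	var green:uint = (colour >> 8) & 0xFF; // 102
	var blue:uint = colour & 0xFF;         // 204
	trace(alpha, red, green, blue); // 255 51 102 204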

Back to practicalities

Of course when you set a limit on the number of values you can represent, it's conceivable you could exceed this limit – especially once you start adding decimal places. Enter the venerable Number, which is 64bit based and so provides a much wider range of workable values. In theory, choosing appropriately between number types should allow you to optimise code, but the benefits were dubious until version ten of the Flash Player. In other languages such as the C family, which were born in an age where every byte was crucial, you have far more number types to work with – each with its own memory footprint. In fact even the concept of Number gets broken down into types such as float, double and long double. The long and short of it, though, is that working with numbers is just about the fastest thing a computer can do, and it's usually safe to go with Number unless the case for int is obvious – such as looping through an array – or you are working with performance sensitive code.
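
For example, the obvious case mentioned above – an int loop counter while iterating over an array (the values here are just placeholders):

	var scores:Array = [12, 7, 31, 18];
	var total:Number = 0;
	for (var i:int = 0; i < scores.length; i++) { // the counter is only ever a whole number, so int fits
		total += scores[i];
	}
	trace(total); // 68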

Sorry, but you're not my type

As you start to develop as an Object Oriented programmer, you may have also thought of using the different number types to your advantage, to help create type safety within the data getting passed around. Be warned though, the results might not be what you expect. Say for instance you have a function that will break if you pass it anything other than a positive whole number. At this point the light bulb illuminates and you decide to declare the function so it only takes a uint as an argument. You can now rest secure in the knowledge your function won't break – job done! Ideally, for this scenario to work, the compiler should throw an error when you try to pass a Number or int to the function, but instead (as with the other primitive types) the compiler will quietly convert between the number types, and this is where the weird comes in.
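
A quick sketch of that scenario (setQuantity is just a made-up function for illustration):

	function setQuantity(quantity:uint):void {
		trace(quantity);
	}
	
	setQuantity(5);    // 5 - fine
	setQuantity(2.75); // 2 - no compiler error, the decimal is silently dropped
	setQuantity(-5);   // 4294967291 - no compiler error, the negative value wraps around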

You can of course convert uint > int > Number to your heart's content without any ill effects, but going the other way will cause problems. If you convert a Number to an int, it will simply drop everything after the decimal point. You could use this as a quick and dirty way to round the number, but it is safer to use one of the Math functions so you can be sure you're getting the rounding behaviour you want. You can also convert a positive int to uint without any ill effects, but a negative int will wrap around, resulting in a massive uint value:

	var number:Number = 23.84;
	var posInt:int = number;
	var posIntMath:int = Math.round(number);
	trace(posInt); // 23
	trace(posIntMath); //24
	
	number = -23.84;
	var negInt:int = number;
	var negIntMath:int = Math.round(number);
	trace(negInt); // -23
	trace(negIntMath); //-24

	var uInteger:uint = negInt;
	trace(uInteger); //4294967273 - Not groovy at all!
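
As a short follow-up to the point about using the Math functions: casting truncates towards zero, which stops matching Math.floor() as soon as the value goes negative:

	var value:Number = -23.84;
	trace(int(value));        // -23 - truncated towards zero
	trace(Math.floor(value)); // -24 - rounded down
	trace(Math.ceil(value));  // -23 - rounded up
	trace(Math.round(value)); // -24 - rounded to the nearest whole number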

An existential crisis

The next subtlety of type conversion you might encounter is when trying to test the existence of a number, in much the same way you might test whether a variable is null before executing a piece of code. If things were consistent, you would expect that declaring a number would be enough for it to evaluate to true as a Boolean, by virtue of the fact that you explicitly declared it. However, it is important to note that zero cast to a Boolean actually evaluates to false. This again goes back to the fundamental basics of computers, where 0 = false and 1 = true. Not only this, but a Number can't hold null; instead it uses a special value to represent "nothing here", namely NaN (Not a Number). Okay, so now we're making progress – we know that if we want to evaluate the existence of a number we should check for NaN, not null, and we do this using the isNaN() function. But the confusion doesn't end there: NaN only exists for the Number type, not int or uint. The moment an int or uint is declared, it is automatically assigned the value of zero (remember how we mentioned the computer assigns these to memory in an optimal way?):

	var number1:Number;
	if (number1) trace(true);
	else trace(false); //false
	
	number1 = 0;
	if (number1) trace(true);
	else trace(false); //false - number is declared, would expect true
	
	var number2:Number;
	if (!isNaN(number2)) trace(true);
	else trace(false); //false
	
	number2 = 0;
	if (!isNaN(number2)) trace(true);
	else trace(false); //true - number is declared, evals as expected
	
	var integer:int;
	if (!isNaN(integer)) trace(true);
	else trace(false); //true - opposite of what we expect
	
	integer = 0;
	if (integer) trace(true);
	else trace(false); //false - same as number
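
One practical consequence (a small sketch, assuming you need an "unset" state): if you have to tell "never assigned" apart from "assigned zero", use a Number and set it to NaN yourself, because int and uint have no spare value to mean "nothing":

	var unsetScore:Number = NaN; // explicit "no score yet" marker
	trace(isNaN(unsetScore)); // true - we can tell it was never set
	
	unsetScore = 0;
	trace(isNaN(unsetScore)); // false - zero is a real score, not "unset"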

Making a call

One final subtlety to note with numbers once again comes back to memory. When you declare a variable (var a) and assign it a value, what you're basically doing is defining a reference to a particular value stored in memory. When you then duplicate this variable (var b), there are two ways the program can respond. First, it can simply create a new reference to the same memory location, so that if var a gets changed, var b changes along with it. The alternative is that the value is copied to a new memory location, which var b then references, meaning that if you change the original variable, var b remains unchanged. Some other languages actually allow you to choose which behaviour you want, but in ActionScript it's hardwired that primitive data types adopt the latter:

	var a:Number = 10;
	var b:Number = a;
	a += 5;
	trace(a); //15 - modified
	trace(b); //10 - unchanged
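
For contrast, non-primitive types such as Array take the first route – a quick sketch just to make the difference concrete:

	var listA:Array = [10];
	var listB:Array = listA; // copies the reference, not the contents
	listA[0] += 5;
	trace(listA[0]); //15 - modified
	trace(listB[0]); //15 - changed along with it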

There's a good chance you've seen something like the above before, but the implications become more obvious when you start writing your own math functions. Because copying a number variable results in a new value being created, passing a number to a function actually hands it a duplicate of the value, which means that in order to work with the result of the function you need to assign the return value back to the original variable:

	var a:Number = 10;
	double(a); //the function doubles a copy of a, but the result is discarded - we have no access to it
	trace(a); //10
	a = double(a); //by reassigning the return value, we now have access to it
	trace(a); //20
	
	function double(num:Number):Number {
		return num * 2;
	}

That's all for now, go forth and multiply! (Okay I'll stop now).