# 2. Unsigned integer

An integer is used to represent a whole number, one that does not have any fractional part. There are two types of integer in computing - unsigned and signed.

Unsigned integers are the simplest to think about, but they can only deal with positive numbers. For example, the unsigned 8-bit binary integer

1111 1111

is the equivalent of denary 255.

If the software is only going to deal with positive numbers then this is a very efficient data type to use.

For example, pixel colour can be represented with three positive 8-bit integers - one for Red, one for Green and one for Blue - as there are no negative values to be handled.

Many computer languages support two sizes of unsigned integer.

#### Short unsigned integer

This is 16 bits wide (2 bytes), which means it can efficiently handle any positive number up to $2^{16} - 1$, which is 65,535. In binary the largest short unsigned integer is

1111 1111 1111 1111

In a typical computer language ('C', for example) a variable of short unsigned integer type is declared explicitly, like this

unsigned short int MyVariable;

(in C the unsigned keyword is needed, because a plain short int is signed). Other languages use a short integer as their default type, in which case a simple declaration is enough

int MyVariable;

#### Long unsigned integer

This is 32 bits wide (4 bytes), which means it can handle any positive number up to 4.29 billion, or $2^{32} - 1$ to be exact. The long unsigned int is therefore suited to handling very large positive whole numbers.

The disadvantage of using a long integer is that it consumes 4 bytes of storage, which is wasteful if none of the numbers being used ever exceeds 65,535.

A typical use for the long unsigned int is as the data type for an auto-incrementing primary key in a database, since it can handle far more than the 65,535 records a short integer allows.

In a typical computer language, it may be fully declared like this

unsigned long int MyVariable;

**Challenge**: see if you can find out one extra fact on this topic that we haven't already told you.
