Curiously, there are several notions of a type.
In programming languages, we typically think of a type as the interpretation of a term, and might attach it to some set, for example the integers. A more elegant definition, from model theory, takes the type of an element to be the set of logical formulas it satisfies in some universe. In other words, types are just sets of formulas. Since individual integers are definable (without explicitly naming them) in the standard language of arithmetic, each one gets its own "type", even though in PL we often give them all the same one.
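Concretely, for a structure $\mathcal{M}$ and an element $a$ of its universe, the model-theoretic type of $a$ is usually written

$$\mathrm{tp}(a) = \{\, \varphi(x) \;:\; \mathcal{M} \models \varphi(a) \,\},$$

so two elements have the same type exactly when no formula can tell them apart.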
This fanciness essentially just tells us that we can tell apart different integers by asking whether they satisfy some formula. For example, you can figure out what 0 is with just addition, by asking whether your candidate is the additive identity (spelled out below). On the other hand, you can't define something like < with just addition, so you end up stuck when trying to figure out what 1 is.
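To spell out the first example: over the language with only $+$, the formula

$$\varphi(x) \;:=\; \forall y\, (x + y = y)$$

holds of $a$ in $\mathbb{Z}$ exactly when $a = 0$, since 0 is the unique additive identity. No analogous formula can single out 1, because the map $x \mapsto -x$ is an automorphism of $(\mathbb{Z}, +)$ sending 1 to $-1$, so any formula that 1 satisfies, $-1$ satisfies too.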
A lot of these minutiae get lost when we think about things as they have been built, and not why they were built.