Why does ghc warn that ^2 requires "defaulting the constraint to type 'Integer'"?


If I compile the following source file with ghc -Wall:

main = putStr . show $ squareOfSum 5

squareOfSum :: Integral a => a -> a
squareOfSum n = (^2) $ sum [1..n]

I get:

powerTypes.hs:4:18: warning: [-Wtype-defaults]
    • Defaulting the following constraints to type ‘Integer’
        (Integral b0) arising from a use of ‘^’ at powerTypes.hs:4:18-19
        (Num b0) arising from the literal ‘2’ at powerTypes.hs:4:19
      In the expression: (^ 2)
      In the expression: (^ 2) $ sum [1 .. n]
      In an equation for ‘squareOfSum’:
          squareOfSum n = (^ 2) $ sum [1 .. n]
  |
4 | squareOfSum n = (^2) $ sum [1..n]
  |                  ^^

I understand that the type of (^) is:

Prelude> :t (^)
(^) :: (Integral b, Num a) => a -> b -> a

which means it works for any a^b provided a is a Num and b is an Integral. I also understand the type hierarchy to be:

Num --> Integral --> Int or Integer

where --> denotes "includes" and the first two are typeclasses while the last two are types.

Why does ghc not conclusively infer that 2 is an Int, instead of "defaulting the constraints to Integer"? Why is ghc defaulting anything? Is replacing 2 with 2 :: Int a good way to resolve this warning?

CodePudding user response:

In Haskell, numeric literals have a polymorphic type

2 :: Num a => a

This means that the expression 2 can be used to produce a value of any numeric type. For instance, all these expressions type-check:

2 :: Int
2 :: Integer
2 :: Float
2 :: Double
2 :: MyCustomTypeForWhichIDefinedANumInstance

Technically, each time we use 2 we would have to write 2 :: T to choose the actual numeric type T we want. Fortunately, this is often not needed since type inference can frequently deduce T from the context. E.g.,

foo :: Int -> Int
foo x = x + 2

Here, x is an Int because of the type annotation, and (+) requires both operands to have the same type, hence Haskell infers 2 :: Int. Technically, this is because (+) has type

(+) :: Num a => a -> a -> a

Sometimes, however, type inference can not deduce T from the context. Consider this example involving a custom type class:

class C a where bar :: a -> String

instance C Int     where bar x = "Int: " ++ show x
instance C Integer where bar x = "Integer: " ++ show x

test :: String
test = bar 2

What is the value of test? Well, if 2 is an Int, then we have test = "Int: 2". If it is an Integer, then we have test = "Integer: 2". If it's another numeric type T, we can not find an instance for C T.

This code is inherently ambiguous. In such a case, Haskell mandates that numeric types that can not be deduced are defaulted to Integer (the programmer can change this default to another type, but it's not relevant now). Hence we have test = "Integer: 2".
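(As an aside, the mechanism for changing that default is a module-level default declaration. A minimal sketch, not part of the original answer: listing Int first makes ambiguous numeric constraints default to Int instead of Integer.)

```haskell
-- Sketch: override the standard default list (Integer, Double).
-- With Int listed first, an ambiguous numeric constraint that Int
-- satisfies is now defaulted to Int rather than Integer.
default (Int, Integer, Double)

main :: IO ()
main = print ((2 :: Integer) ^ 2)  -- the exponent's Integral constraint defaults to Int
```

With -Wall, GHC still warns about the defaulting; the declaration only changes which type it picks.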

While this mechanism makes our code type check, it might cause an unintended result: for all we know, the programmer might have wanted 2 :: Int instead. Because of this, GHC chooses the default, but warns about it.

In your code, (^) can work with any Integral type for the exponent. But, in principle, x ^ (2 :: Int) and x ^ (2 :: Integer) could lead to different results. We know this is not the case since we know the semantics of (^), but to the compiler (^) is just an arbitrary function with that type, which could behave differently on Int and Integer. Consider, e.g.,

a ^ n = if n + 3000000000 < 0 then 0 else 1

When n = 2, if we use n :: Int the guard could be true on a 32-bit system, where the addition overflows. This is not the case when using n :: Integer, which never overflows.
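To make the overflow concrete without assuming a 32-bit system (most modern GHCs use a 64-bit Int), here is a small sketch that forces the wrap-around with maxBound:

```haskell
main :: IO ()
main = do
  -- Int arithmetic wraps on overflow: one past maxBound is minBound.
  print (maxBound + 1 :: Int)
  -- Integer is arbitrary-precision, so the same sum never overflows.
  print (toInteger (maxBound :: Int) + 1)
```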

The standard solution, in these cases, is to resolve the warning using something like x ^ (2 :: Int).
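Applied to the original program, the annotated exponent pins down the constraint and silences -Wtype-defaults:

```haskell
main :: IO ()
main = putStr . show $ squareOfSum 5

squareOfSum :: Integral a => a -> a
-- (2 :: Int) fixes the Integral constraint of (^), so GHC has
-- nothing to default and -Wall compiles without the warning.
squareOfSum n = (^ (2 :: Int)) $ sum [1 .. n]
```

Running this prints 225 (sum [1..5] is 15, squared).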
