Assigning constrained literal to a polymorphic variable


While learning Haskell with Haskell Programming from First Principles, I found an exercise that puzzles me.

Here is the short version:

For the following definition:
   a) i :: Num a => a
      i = 1
   b) Try replacing the type signature with the following:
      i :: a

The replacement gives me an error:

error:
    • No instance for (Num a) arising from the literal ‘1’
      Possible fix:
        add (Num a) to the context of
          the type signature for:
            i' :: forall a. a
    • In the expression: 1
      In an equation for ‘i'’: i' = 1
   |
38 | i' = 1
   |      ^

It is more or less clear to me how the Num constraint arises. What is not clear is why assigning 1 to the polymorphic variable i' gives an error.

Why this works:

id 1

while this one doesn't:

i' :: a
i' = 1
id i'

Shouldn't it be possible to assign a more specific value to a less specific variable, losing some type information, as long as that causes no issues?

CodePudding user response:

This is a common misunderstanding. You probably have something like this in mind, from a class-based OO language:

class Object {};

class Num : public Object { public: Num add(...){...} };

class Int : public Num { int i; ... };

And then you would be able to use an Int value as the argument to a function that expects a Num argument, or a Num value as the argument to a function that expects an Object.

But that's not at all how Haskell's type classes work. Num is not a class of values (as in the example above, where it would be the class of all values that belong to one of its subclasses). Instead, it's the class of all types that represent specific flavours of numbers.
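For reference, here is an abbreviated view of the Num class from the Prelude (the real class has a few more methods). The class variable a stands for a type, and the methods are the operations that type must support:

class Num a where
  (+) :: a -> a -> a
  (*) :: a -> a -> a
  fromInteger :: Integer -> a
  -- abbreviated; the real class also has (-), negate, abs and signum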

How is that different? Well, a polymorphic literal like 1 :: Num a => a does not generate a specific Num value that can then be upcast to a more general class. Instead, it expects the caller to first pick the concrete type in which the number should be rendered; the number is then generated immediately at that type, and the type never changes afterwards.
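A minimal sketch of what that means in practice (asInt and asDouble are just illustrative names):

i :: Num a => a
i = 1

asInt :: Int
asInt = i        -- this use site picks a = Int, so the 1 is built as an Int

asDouble :: Double
asDouble = i     -- another use site picks a = Double; same i, different concrete number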

In other words, a polymorphic value has an implicit type-level argument. Whoever wants to use i needs to do so in a context where both

  1. It is unambiguous which type should be used for a. (It doesn't necessarily need to be fixed right there: the caller could itself be a polymorphic function, as in the sketch after this list.)
  2. The compiler can prove that this type a has a Num instance.
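To make both points concrete, here is a small sketch (double, twice and fixed are hypothetical names; the TypeApplications extension is only needed for the last definition):

{-# LANGUAGE TypeApplications #-}

i :: Num a => a
i = 1

double :: Num a => a -> a   -- still polymorphic: the Num constraint is merely passed along
double x = x + x

twice :: Num a => a         -- a is still not fixed; the choice is deferred to the caller of twice
twice = double i

fixed :: Int
fixed = i @Int              -- here the implicit type-level argument is supplied explicitly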

In C++, the analogue of Haskell's typeclasses / polymorphic literals is not [sub]classes and their objects, but templates that are constrained by a concept:

#include <concepts>

template<typename A>
concept Num = std::constructible_from<A, int>; // simplified

template<Num A>
A poly_1() {
  return 1;
}

Now, poly_1 can be used in any setting that demands a type which fulfills the Num concept, i.e. in particular a type that is constructible_from an int, but not in a context which requires some other type.

(In older C++ such a template would just be duck-typed: it wouldn't be explicit that it requires a Num-like type, but the compiler would simply try to use it as such and then give a type error upon noticing that 1 can't be converted to the specified type.)

CodePudding user response:

tl;dr

A value i' declared as i' :: a must be usable¹ in place of a value of any type, with no exceptions. 1 is no such value: it can't be used where a String is expected, to give just one example.

Longer version

Let's start from a simpler scenario, one where you do need a type constraint:

plus :: a -> a -> a
plus x y = x + y

This does not compile, because the signature is equivalent to plus :: forall a. a -> a -> a, and it is plainly not true that the RHS, x + y, is meaningful for any common type a that x and y are inhabitants of. You can fix this by providing a constraint guaranteeing that + is available between two as, which you do by putting Num a => right after :: (or by giving up on polymorphic types altogether and changing a to Int).
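Concretely, the fixed version reads:

plus :: Num a => a -> a -> a
plus x y = x + y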

But there are functions that don't require any constraints on their arguments. Here are three of them:

id :: a -> a
id x = x

const :: a -> b -> a
const x _ = x

swap :: (a, b) -> (b, a)    -- this is Data.Tuple.swap
swap (a, b) = (b, a)

You can pass anything to these functions, and they'll always work, because their definitions make no assumption whatsoever about what can be done with those arguments; they just shuffle or discard them.
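For example, in GHCi all of these work, with no constraints in sight (just a quick illustration):

ghci> id "hello"
"hello"
ghci> const 5 "ignored"
5
ghci> import Data.Tuple (swap)
ghci> swap (1, "a")
("a",1)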


Similarly,

i' :: a
i' = 1

cannot compile, because it's not true that 1 can represent a value of any type a. It can't represent a String, for instance, whereas the signature i' :: a expresses the idea that you can put i' anywhere: where an Int is expected, where a generic Num is expected, where a String is expected, and so on.

In other words, the above signature says that you can use i' in both of these statements:

j = i' + 1
k = i' ++ "str"

So the question is: just like we found some functions that have signatures not constraining their arguments in any way, do we have a value that inhabits every single type you can think of?

Yes, there are some values like that, and here are two of them:

i' :: a
i' = error ""

j' :: a
j' = undefined

Both are "bottoms", or ⊥.
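That's also why definitions like these compile at whatever type you pick, even though forcing either value at runtime raises an exception (s and n are arbitrary names):

s :: String
s = j'      -- type-checks: j' inhabits String too

n :: Int
n = j'      -- ... and Int, and every other type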


(¹) By "usable" I mean: you can write it anywhere the code compiles, and the code keeps compiling.
