Before somebody marks this as a duplicate, I know how to fix it (and that has been answered already), but I'd like to understand why this limitation exists, and I found no answer on here or in the Rust docs.
For example, I wrote something along these lines:
struct ItemList<T> {
    items: Vec<T>
}

impl<T> ItemList<T> {
    fn getFirstItem(&self) -> Link<T> { Link { position: 0 } }
}

struct Link<T> {
    position: usize
}

impl<T> Link<T> {
    fn getFromList<'a>(&self, list: &'a ItemList<T>) -> &'a T {
        &list.items[self.position]
    }
}
But rustc rejects my code with this error:
error[E0392]: parameter `T` is never used
--> src/main.rs:8:13
|
8 | struct Link<T> {
| ^ unused parameter
|
= help: consider removing `T`, referring to it in a field, or using a marker such as `PhantomData`
= help: if you intended `T` to be a const parameter, use `const T: usize` instead
For more information about this error, try `rustc --explain E0392`.
Why is this an error rather than a warning? As far as I understand, an unused type parameter only costs a little compile time, so why enforce removing it or adding a PhantomData marker? Does it have implications that I missed?
CodePudding user response:
It's a variance issue.
Rust determines the variance of a type parameter from how it is used. If a parameter is not used at all, Rust cannot determine its variance.
The Rustonomicon has a chapter about variance.
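For the code in the question, the usual fix is the PhantomData marker that the error message suggests. Here is a minimal sketch (method names changed to snake_case; which PhantomData flavor you pick is your choice and determines the variance the compiler infers for T):

use std::marker::PhantomData;

struct ItemList<T> {
    items: Vec<T>,
}

impl<T> ItemList<T> {
    fn get_first_item(&self) -> Link<T> {
        Link { position: 0, _marker: PhantomData }
    }
}

struct Link<T> {
    position: usize,
    // Zero-sized marker: Link<T> now acts, for variance (and drop-check)
    // purposes, as if it contained a T. Other flavors, e.g.
    // PhantomData<fn() -> T>, express a different relationship to T.
    _marker: PhantomData<T>,
}

impl<T> Link<T> {
    fn get_from_list<'a>(&self, list: &'a ItemList<T>) -> &'a T {
        &list.items[self.position]
    }
}

PhantomData is zero-sized, so the marker field costs nothing at runtime; it only tells the compiler how Link<T> should relate to T.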
CodePudding user response:
Adding to what @mcarton said, it was not always this way: Rust used to infer unused generic parameters to be bivariant. This changed in 2014 (before Rust 1.0). The motivation is explained in the RFC:
Motivation
Today, variance inference for lifetimes includes the notion of bivariance -- which essentially amounts to unconstrained. In principle, this can have some use, but in practice it tends to be a vector for bugs. In fact, there is no known Rust code that intentionally uses bivariance (though there seems to be plenty that does so accidentally and incorrectly). This RFC proposes that we simply make an inference result of bivariance an error.
As an example of where this comes up, imagine a struct with a "phantom" lifetime parameter, meaning one that is not actually used in the fields of the struct itself. One example of such a type is Items, the vector iterator:

struct Items<'vec, T> { x: *mut T }

Here the lifetime 'vec is intended to represent the lifetime of the vector being iterated over and hence to prevent the iterator from outliving the container. However, because it does not appear in the body of Items at all, the compiler would currently consider it irrelevant to subtyping. This means that you could convert from a Items<'a, T> to a Items<'static, T>, causing the iterator to outlive the container it is iterating over.

To prevent this scenario, the actual definition of the iterator in the standard library uses a marker type. The marker type informs the compiler that, although 'vec does not appear to be used, it should act as if it were. For example, Items might be modified as follows:

struct Items<'vec, T> {
    x: *mut T,
    marker: marker::CovariantType<&'vec T>,
}

The CovariantType marker basically informs the compiler that it should act "as though" a reference of type &'vec T were a member of Items, even though it is not. Another equivalent option here would be ContravariantLifetime.

Currently, the user must know to insert these markers or else silently get the wrong behavior. This RFC makes it an error to have a type or lifetime parameter that is not (transitively) used somewhere in the type. Nothing else is changed.
The code is pretty old (e.g. CovariantType and ContravariantLifetime were replaced by PhantomData), but the concept still applies.