Is it bad practice to use an abstract base class to enforce a common interface for a template parameter?

Time: 09-17

I'm new to template programming, and have stumbled upon this idiom. For example, something like:

#include <type_traits>

class FooBase {
 public:
  virtual void do_something() = 0;
};

template <class Foo> // Foo is derived from FooBase
void g(Foo& foo) {
  static_assert(std::is_base_of<FooBase, Foo>::value);
  // ...
  foo.do_something();
  // ...
}
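
For concreteness, here is a minimal usage sketch of the idiom (MyFoo is a hypothetical derived class, not part of the original code):

class MyFoo : public FooBase {
 public:
  void do_something() override {
    // concrete behaviour for this particular Foo
  }
};

int main() {
  MyFoo foo;
  g(foo);  // the static_assert passes because MyFoo derives from FooBase
}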

In my own opinion, this pattern is useful because:

  1. The pure virtual function declaration has an explicit signature; compared to a C++20 concept requires-clause (if concepts are even available), it is easy to specify attributes, arguments, and the return type.
  2. This is good documentation; the requirements for defining a new Foo class are clear from just reading the FooBase header.
  3. Common functionality shared by all Foo classes can be refactored into the single FooBase class.

However, I'm concerned about the performance implications of using a virtual function. My understanding is that there is no extra cost at runtime, since the function is called on the derived class; but the compiler will be unable to inline the function.
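
As a hedged aside on the inlining question (FooImpl is a hypothetical derived class): if the derived class, or its override, is marked final, the compiler can prove the dynamic type inside g<FooImpl> and typically devirtualize, and often inline, the call.

class FooImpl final : public FooBase {
 public:
  void do_something() override {
    // concrete work
  }
};

// Inside g<FooImpl>, foo.do_something() is called on a statically known,
// final type, so compilers can usually devirtualize and inline it.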

Another downside is that virtual member function templates are not allowed.
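
For reference, this is the kind of declaration the language rejects (do_something_with is a made-up name used only for illustration):

class FooBase {
 public:
  virtual void do_something() = 0;

  // Ill-formed: a member function template cannot be virtual, so the base
  // class cannot express "do_something_with any T" as part of the interface.
  // template <class T>
  // virtual void do_something_with(T t) = 0;
};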

Finally, I worry this may be a code smell in general, since it uses a feature meant for runtime polymorphism to perform static checking. C++20 concepts are probably the "correct" way to do this, but they seem much less convenient, for the three reasons above.

CodePudding user response:

It's bad practice because you're using the wrong tool for the job. You're writing code that communicates the wrong things and has no actual enforcement mechanism.

The point of dynamic polymorphism is that it's dynamic: the concrete type is resolved at runtime, so you can pass objects to functions that don't fully know the type they're given at compile time. This allows code in one location to be written against the base class without being exported into headers and the like, and the actual class being used at any point can likewise stay out of the headers. Only the source of the class needs to know the actual type.

Compile-time polymorphism like templates is predicated on knowing everything at compile-time. All of the code, both source and destination, needs to know what the types are.

In essence, you like the way dynamic polymorphism spells out its requirements compared to compile-time polymorphism, but you try to side-step the performance costs of it by using compile-time polymorphism. This creates code confusion, as you're mixing up mechanisms.

If someone sees a base class with virtual functions, they're going to assume that your code will be passing pointers/references to the actual classes around. That's part of the expectation of using them in most cases (even virtual-based type erasure effectively does this). Seeing instead a bunch of template functions that take the concrete types directly will be confusing.

Additionally, you have no enforcement mechanism. A function which takes a FooBase& ensures that a user cannot call it with a type that is not an actual FooBase-derived type (well, you could make it implicitly convertible to one, but let's ignore perfidy here). Your template functions, eschewing concepts, have no similar enforcement. You can document that it must be a FooBase-derived type, but you don't statically enforce it.

At the very least, the template parameter should be declared as std::derived_from<FooBase> Foo (and no, static_assert is not a good idea).
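
A minimal sketch of that suggestion, applied to the same g as in the question (assuming <concepts> is available):

#include <concepts>

template <std::derived_from<FooBase> Foo>
void g(Foo& foo) {
  // The constraint replaces the static_assert and participates in overload
  // resolution, so a call with a non-derived type is rejected at the call site.
  foo.do_something();
}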

But really, you should just use a proper concept. You want compile-time polymorphism, and whatever your personal feelings on concepts are, concepts are the language mechanism C++20 has for defining the prototype for compile-time polymorphism. Nobody will be confused as to the meaning and intent of your code if you use concepts.
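
For example, a concept can pin down the exact signature just as the pure virtual declaration does; this is only a sketch, and the concept name FooLike is made up for illustration:

#include <concepts>

template <class T>
concept FooLike = requires(T t) {
  // Requires a callable do_something() whose result type is exactly void.
  { t.do_something() } -> std::same_as<void>;
};

template <FooLike Foo>
void g(Foo& foo) {
  foo.do_something();
}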
