Is there a design with fewer similar virtual functions?


I am writing a library on top of the Eigen Tensor library, which makes heavy use of templates. There is a base class called Layer from which many classes inherit. Each child class must implement virtual functions such as void Forward(const Tensor<2> &input) and void Backward(const Tensor<2> &gradient).

The thing is, each child class accepts one specific input and gradient rank, such as Tensor<2>, Tensor<3>, etc., so my base class ends up having

virtual void Forward(const Tensor<2> &input)
virtual void Forward(const Tensor<3> &input)
virtual void Forward(const Tensor<4> &input)
virtual void Backward(const Tensor<2> &gradient)
virtual void Backward(const Tensor<3> &gradient)
virtual void Backward(const Tensor<4> &gradient)

and it's not possible to make virtual functions templates. Is this poor design, and is there a different design that can work like

template<int InputRank>
virtual void Forward(const Tensor<InputRank> &input)

template<int OutputRank>
virtual void Backward(const Tensor<OutputRank> &gradient)

and allow me to place child objects into a single vector?

CodePudding user response:

If some of the child classes don't implement all of those functions, then the base class is not where those functions should be declared.

The child classes can inherit from a Layer that doesn't have those functions, and also from

template<size_t Rank>
struct CanBackpropagate {
    virtual void Forward(const Tensor<Rank> &input) = 0;
    virtual void Backward(const Tensor<Rank> &gradient) = 0;
};

Or you have the base class BaseLayer, and

template<size_t Rank>
class Layer : public BaseLayer {
public:
    virtual void Forward(const Tensor<Rank> &input) = 0;
    virtual void Backward(const Tensor<Rank> &gradient) = 0;
};

It's OK to have multiple containers for these, e.g. std::vector<Layer*> layers; alongside std::vector<CanBackpropagate<2> *> rank2; etc.

CodePudding user response:

You can get something similar by having a non-virtual function template in the base class dispatch to a virtual implementation function.

Here is what the base class would look like (using std::array<float, N> as a stand-in for Tensor<N>):

#include <array>
#include <cstddef>

class Layer
{
public:

    virtual void ForwardImpl(const void *tensor, size_t nrank) = 0;

    template<size_t nrank>
    void Forward(const std::array<float, nrank> &ntensor) {
        ForwardImpl(&ntensor, nrank);
    }

};

Above, the non-virtual template extracts the rank and passes it to the virtual function.

Here is an example derived class:

#include <iostream>

class LayerImpl : public Layer {
public:

    template<size_t n>
    void Forward(const void *tensor)
    {
        const auto &t = *static_cast<const std::array<float, n>*>(tensor);
        (void)t; // implementation goes here
        std::cout << n << std::endl;
    }

    void ForwardImpl(const void *tensor, size_t nrank) override {
        ForwardTmpl<LayerImpl>(this, tensor, nrank);
    }

};

And the glue (which must be declared before LayerImpl::ForwardImpl can call it):

#include <unordered_map>

template<class TyName>
inline void ForwardTmpl(TyName *pthis, const void *tensor, size_t nrank) {
    const std::unordered_map<size_t, void (TyName::*)(const void*)> rank_map = {
        {0, &TyName::template Forward<0>},
        {1, &TyName::template Forward<1>},
        {2, &TyName::template Forward<2>},
        {3, &TyName::template Forward<3>},
    };

    // performs Forward for the specific nrank
    (pthis->*rank_map.at(nrank))(tensor);
}

And example main:

int main() {
    Layer *base = new LayerImpl{};

    std::array<float, 2> testen{};

    base->Forward(testen);
}

Which in this case prints 2.


I think, however, that this question is better suited for Stack Overflow than Software Engineering.
