C++ Templates: Using Templates to Generate the Fibonacci Sequence

The following code demonstrates a method for generating the Fibonacci sequence at compile time. Since the sequence is generated at compile time, retrieving a number at runtime is claimed to be constant time.

This code recursively generates templated types: instantiating a template such as Fib<10> f; causes the compiler to generate 11 classes, namely Fib<0> through Fib<10>.

Because the compiler instantiates each class only once, previous Fibonacci values are effectively cached, which keeps compilation fast. As a result, this is just about the fastest Fibonacci generator I have seen, even counting compile time.

#include <iostream>
#include <cassert>
#include <cstdint> // for uint64_t

using namespace std;

template<int stage>
struct Fib
{
  //Make this value a compile-time constant equal to Fib<stage-1>::value +
  //Fib<stage-2>::value, which the compiler will generate for us and save in
  //the types Fib<stage-1> and Fib<stage-2>. This all works because stage is
  //known at compile time, as all template parameters must be.
  static const uint64_t value = Fib<stage-1>::value + Fib<stage-2>::value;

  static inline uint64_t getValue(int i)
  {
    if (i == stage) // Does the current class hold the given place?
    {
      return value;  // Return it!
    } else {
      return Fib<stage-1>::getValue(i); // Get it from the previous class!
    }
  }
};

template<> // Template specialization for the 0's case.
struct Fib<0>
{
  static const uint64_t value = 1;

  static inline uint64_t getValue(int i)
  {
    assert(i == 0);
    return 1;
  }
};

template<> // Template specialization for the 1's case
struct Fib<1>
{
  static const uint64_t value = 1;

  static inline uint64_t getValue(int i)
  {
    if (i == 1)
    {
      return value;
    } else {
      return Fib<0>::getValue(i);
    }
  }
};

int main(int, char *[])
{
  //Generate (at compile time) 100 places of the Fib sequence.
  //Then, (at runtime) output the 100 calculated places.
  //Note: a 64 bit int overflows at place 92
  for (int i = 0; i < 100; ++i)
  {
    cout << "n:=" << i << " => " << Fib<100>::getValue(i) << endl;
  }
}

Comments

Have you looked at the code produced by this? Is it really constant-time?

Seems like for the 100th fib number, it would execute 100 compares... that's not constant time, that's O(n).

I think the fastest Fib generator would be to pre-generate the numbers and output it as a static array that you include in your code.

It is O(N) to look up each element at run time.

Here is a constant time implementation (the number generation happens at compile time):

#include <iostream>

using namespace std;

template<int N>
struct Fib
{
  enum { Num1 = Fib<N-1>::number,
         Num2 = Fib<N-2>::number };

  static const int number = Num1 + Num2;
};

template<>
struct Fib<0>
{
  static const int number = 1;
};

template<>
struct Fib<1>
{
  static const int number = 1;
};

int main(void)
{
  cout << Fib<10>::number << endl;
}

That's actually almost exactly what I did. The problem is that you show only one value lookup. Update your example to show multiple outputs. Specifically, I would like to see your example output the first 100 numbers in a loop.

-Jason

I think "Fib<100>::getValue(i)" in the for loop is incorrect.. Shouldn't be like this? "Fib<i>::getValue(i)"

Try compiling your suggested version.

Template parameters must be known at compile time. So, in my example, I instantiate a compile time version of Fib that "knows" the first 100 places. Then, I extract the desired element from it with the "getValue" function.

I've just posted a blog about generating the Fibonacci sequence: http://sigidagi.blogspot.com/2011/06/variadic-templates-and-fibonacii_18...
The sequence is stored in a tuple at compile time. I haven't tested how fast the algorithm is, but I assume it should be faster (and more elegant, IMHO) than the algorithm described above.