The Reservoir Computing (RC) framework is argued to transfer to any input-driven dynamical system, provided two properties are present: (i) fading memory and (ii) input separability. A typical reservoir consists of a fixed network of recurrently connected processing units; however, recent hardware implementations have shown that reservoirs need not be bound to this architecture. Previously, we demonstrated how the RC framework can be applied to randomly formed carbon nanotube composites to solve computational tasks. Here, we apply the RC framework to an evolvable substrate and compare its performance to that of an established in materia training technique, referred to as evolution in materia. The results show that, by adding a programmable reservoir layer, reservoir computing in materia can significantly outperform the original evolution in materia implementation. This suggests that the RC framework improves performance, even on non-temporal tasks, when combined with the evolution in materia technique.