Abstract
This paper describes techniques for the hardware implementation of a Correlation Matrix Memory (CMM), a fundamental element of a binary neural network. For large-scale problems the CMM algorithm requires dedicated accelerating hardware to maintain the required processing rates. This paper describes the C-NNAP architecture, which provides processing rates nearly eight times those of a modern 64-bit workstation. The C-NNAP architecture hosts a dedicated FPGA processor to perform the bit-summing operation. The system is modular, so multiple boards can be combined into a more powerful platform.
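The bit-summing operation mentioned in the abstract is the core of CMM recall: the set bits of a binary input select rows of the binary weight matrix, the selected rows are summed column-wise, and the sums are thresholded to recover a binary output. The following is a minimal software sketch of this, assuming Willshaw-style one-shot training and a fixed recall threshold; the function names, vector sizes, and thresholding rule are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def train(pairs, n_in, n_out):
    """OR the outer product of each binary (input, output) pair into M.
    (Willshaw-style one-shot CMM training; an assumed scheme.)"""
    M = np.zeros((n_in, n_out), dtype=np.uint8)
    for x, y in pairs:
        M |= np.outer(x, y).astype(np.uint8)
    return M

def recall(M, x, threshold):
    """Recall via bit summing: the set bits of x select rows of M,
    the selected rows are summed per column, and the column sums are
    thresholded. This row-sum step is the operation a dedicated
    bit-summing processor would accelerate."""
    sums = x @ M                       # column-wise count of matched bits
    return (sums >= threshold).astype(np.uint8)

# Example: store one pair, then recall the output from the input.
x = np.array([1, 0, 1, 0], dtype=np.uint8)
y = np.array([0, 1, 1], dtype=np.uint8)
M = train([(x, y)], n_in=4, n_out=3)
print(recall(M, x, threshold=2))  # -> [0 1 1]
```

In hardware the summation reduces to counting set bits down each column of the selected rows, which is why a dedicated population-count datapath on an FPGA can outpace a general-purpose workstation on this step.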
| Original language | English |
|---|---|
| Title of host publication | FIFTH INTERNATIONAL CONFERENCE ON ARTIFICIAL NEURAL NETWORKS |
| Place of Publication | EDISON |
| Publisher | INST ELECTRICAL ENGINEERS INSPEC INC |
| Pages | 161-166 |
| Number of pages | 6 |
| ISBN (Print) | 0-85296-690-3 |
| Publication status | Published - 1997 |
| Event | 5th International Conference on Artificial Neural Networks, Cambridge, 7 Jul 1997 → 9 Jul 1997 |
Conference
| Conference | 5th International Conference on Artificial Neural Networks |
|---|---|
| City | CAMBRIDGE |
| Period | 7/07/97 → 9/07/97 |