A fault tolerance technique for feedforward neural networks
Ekong, Donald Uwemedimo
The use of neural networks in critical applications requires that they continue to perform their tasks correctly despite the possible occurrence of faults. The objectives of this dissertation were to develop a fault tolerance technique for feedforward neural networks and to compare the new technique with existing techniques. The new technique applies existing fault tolerance techniques from digital circuits to complement the inherent fault tolerance attributes of neural networks. A fault tolerance technique with concurrent error detection and correction capabilities, as well as error masking capability, is proposed for feedforward networks. The activation of each hidden and output neuron is computed by three separate self-testing processing elements (PEs). A neuron's output is obtained by comparing the computation and test results of its PEs. This comparison enables the detection of computation errors even when most of the PEs' results are wrong. Tests were performed in which bit errors were injected into the floating-point weights of trained networks that used the proposed fault tolerance technique and of networks that used other techniques. Only the networks using the proposed technique performed all their tasks correctly in the presence of faults. Analyses of reliability, hardware overhead, and timing overhead were also performed on the proposed implementation. Although additional hardware and computation time are needed, the proposed technique can increase reliability. The proposed technique is a significant improvement over existing techniques because it uses comparisons of both the computation and test results of the PEs to enhance the fault tolerance of neural networks.
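The two mechanisms described above can be illustrated with a short sketch. The abstract does not specify the exact comparison logic or fault model, so the following is an assumption-laden illustration: `flip_bit` injects a single bit error into a 64-bit floating-point weight (as in the injection tests), and `neuron_output` selects a neuron's output from three redundant PE results by majority vote, falling back to the self-test results when no majority exists. The function names and the precise fallback rule are illustrative, not taken from the dissertation.

```python
import struct
from collections import Counter

def flip_bit(weight: float, bit: int) -> float:
    """Inject a single bit error into a 64-bit floating-point weight."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", weight))
    (flipped,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return flipped

def neuron_output(results, selftest_ok):
    """Select a neuron's output from three redundant PE computations.

    results     -- the activations computed by the three PEs
    selftest_ok -- whether each PE passed its concurrent self-test

    Majority agreement masks a single wrong result. When no majority
    exists, the result of a PE that passed its self-test is trusted,
    which can recover the correct output even when two of the three
    computed results are wrong.
    """
    value, count = Counter(results).most_common(1)[0]
    if count >= 2:
        return value  # error masking by majority vote
    for result, ok in zip(results, selftest_ok):
        if ok:
            return result  # correction via self-test outcome
    raise RuntimeError("all PEs failed their self-tests")
```

For example, if two PEs compute corrupted activations but fail their self-tests, `neuron_output([0.5, 7.1, -3.2], [True, False, False])` still returns the correct value 0.5, matching the abstract's claim that errors are handled even when most PE results are wrong.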