Abstract
The University of Central Florida (UCF) invention is a photonic matrix accelerator that operates on floating-point numbers. Other photonic accelerators work only with fixed-point numbers, significantly limiting the dynamic range of analog neural networks (ANNs). In the UCF floating-point photonic accelerator (FPA), multiplications are performed by coherent mixing and accumulations are performed in the spatial-mode or wavelength domain. This power-efficient floating-point analog tensor accelerator provides a foundational and vertical technology applicable across the spectrum of commercial and defense applications of artificial intelligence.
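The gain from floating point over fixed point can be illustrated numerically. The sketch below is a hypothetical pure-Python illustration (not the UCF design): each operand is split into a bounded mantissa, standing in for the analog quantity multiplied by coherent mixing, and an exponent that restores the full dynamic range. The function name `float_mac` and the sample values are illustrative assumptions.

```python
import math

def float_mac(weights, inputs):
    """Hypothetical sketch of a floating-point multiply-accumulate.

    Each operand is split into mantissa and exponent: the mantissa
    product models the bounded analog multiplication, while the
    exponent sum recovers the dynamic range that a fixed-point
    analog core would lose. Illustration only, not the UCF design.
    """
    acc = 0.0
    for w, x in zip(weights, inputs):
        mw, ew = math.frexp(w)   # w = mw * 2**ew, mantissa in [0.5, 1)
        mx, ex = math.frexp(x)
        # bounded mantissa product, scaled back by the summed exponents
        acc += (mw * mx) * 2 ** (ew + ex)
    return acc

print(float_mac([0.5, 2.0], [4.0, 0.25]))  # 2.5
```

Because the mantissas stay in a narrow fixed interval, the analog hardware only ever handles well-conditioned values, while the exponents carry the scale.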
Technical Details
The UCF approach effectively enhances the dynamic range of analog computation. Training, however, requires repeated updates of the ANN weight matrix, and a configuration that implements accumulation using time-division multiplexing (TDM) always needs high-speed modulation, resulting in excessive energy expenditure. This construction also directly limits scalability. For example, assume the integration time for accumulation is 200 picoseconds, corresponding to a 5 gigahertz (GHz) clock rate. Even at a maximum modulation speed of 500 GHz, the number of weights per column is only 100. Because the UCF FPA instead performs accumulation in the spatial-mode or wavelength domain, the approach can be scaled to much larger sizes and encode a greater number of exponent levels.
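The scaling limit in the example above follows from simple arithmetic: the number of weights that can be serialized into one TDM accumulation window is the ratio of the modulation rate to the accumulation clock rate. A short check, using the figures stated above:

```python
# TDM accumulation budget: how many weights fit in one integration window.
# Figures from the example above: 200 ps integration and 500 GHz modulation.
integration_time_s = 200e-12                 # 200 picoseconds per accumulation
clock_rate_hz = 1.0 / integration_time_s     # 5 GHz accumulation clock
max_modulation_hz = 500e9                    # 500 GHz maximum modulation speed

weights_per_column = int(max_modulation_hz / clock_rate_hz)
print(weights_per_column)  # 100
```

Accumulating in the spatial-mode or wavelength domain removes this ratio as the bottleneck, since weights are no longer serialized in time.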
Benefit
Fundamentally resolves the dynamic range disadvantage for analog neural networks
Combining encoding with coherent mixing naturally produces multiply-accumulation of floating-point numbers
Power efficient
Market Application
Artificial intelligence
Neural network training
Defense