Software

legume

legume (le GUided Mode Expansion) is an open source Python package that implements a differentiable guided mode expansion (GME) method for multi-layer optical structures [1]. Legume also implements a differentiable plane wave expansion (PWE) method for purely two-dimensional periodic structures. Legume uses the HIPS autograd package for its automatic differentiation capabilities.
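The mode-expansion idea behind legume can be illustrated with a toy example. The sketch below is not legume's API; it is a minimal one-dimensional plane-wave expansion in plain numpy, with the structure parameters chosen arbitrarily for illustration. The Helmholtz operator is expanded in a truncated plane-wave basis, and the band frequencies come out of a small dense eigenproblem:

```python
import numpy as np

# Toy 1D plane-wave expansion for a layered photonic crystal (period a = 1).
# Conceptual sketch of the PWE method only -- NOT legume's API; all structure
# parameters below are arbitrary illustrative values.
a = 1.0                                  # lattice period
eps_hi, eps_lo, fill = 12.0, 1.0, 0.3    # high/low dielectric and fill fraction
N = 512                                  # real-space sampling points
Gmax = 10                                # plane-wave cutoff (|n| <= Gmax)

x = np.arange(N) / N * a
eps = np.where(x < fill * a, eps_hi, eps_lo)
eps_ft = np.fft.fft(eps) / N             # Fourier coefficients eps_G

n = np.arange(-Gmax, Gmax + 1)
G = 2 * np.pi * n / a
# Toeplitz matrix of eps_{G - G'}
E = eps_ft[(n[:, None] - n[None, :]) % N]

def bands(k, num=4):
    """Lowest `num` normalized frequencies omega*a/(2*pi*c) at Bloch wavevector k."""
    # (k+G)^2 c_G = (omega/c)^2 sum_G' eps_{G-G'} c_G'
    A = np.diag((k + G) ** 2).astype(complex)
    w2 = np.linalg.eigvals(np.linalg.solve(E, A)).real   # (omega/c)^2
    w2 = np.sort(w2[w2 > -1e-9])
    return np.sqrt(np.abs(w2[:num])) * a / (2 * np.pi)

# Band frequencies at the Brillouin-zone boundary k = pi/a
edge = bands(np.pi / a)
```

The same construction generalizes to two dimensions and, in GME, to the guided modes of a slab; the appeal of writing it on top of an AD-aware numerical library is that the eigenfrequencies become differentiable functions of the structure parameters.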

  1. Inverse Design of Photonic Crystals through Automatic Differentiation
    Momchil Minkov, Ian A. D. Williamson, Lucio C. Andreani, Dario Gerace, Beicheng Lou, Alex Y. Song, Tyler W. Hughes, Shanhui Fan
    arXiv:2003.00379 [physics]

    arXiv PDF

    Gradient-based inverse design in photonics has already achieved remarkable results in designing small-footprint, high-performance optical devices. The adjoint variable method, which allows for the efficient computation of gradients, has played a major role in this success. However, gradient-based optimization has not yet been applied to the mode-expansion methods that are the most common approach to studying periodic optical structures like photonic crystals. This is because, in such simulations, the adjoint variable method cannot be defined as explicitly as in standard finite-difference or finite-element time- or frequency-domain methods. Here, we overcome this through the use of automatic differentiation, which is a generalization of the adjoint variable method to arbitrary computational graphs. We implement the plane-wave expansion and the guided-mode expansion methods using an automatic differentiation library, and show that the gradient of any simulation output can be computed efficiently and in parallel with respect to all input parameters. We then use this implementation to optimize the dispersion of a photonic crystal waveguide, and the quality factor of an ultra-small cavity in a lithium niobate slab. This extends photonic inverse design to a whole new class of simulations, and more broadly highlights the importance that automatic differentiation could play in the future for tracking and optimizing complicated physical models.


GitHub Repository

vtmm

vtmm is a vectorized implementation of the transfer matrix method in TensorFlow for computing the optical reflection and transmission of multilayer planar stacks. vtmm supports some of the same functionality as the tmm Python package developed by Steven Byrnes. vtmm’s vectorization over both frequency and wavevector leads to approximately an order of magnitude reduction in computation time. Moreover, using TensorFlow’s automatic differentiation capabilities, gradients of scalar loss / objective functions of the computed transmission and reflection can be taken for “free.”
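The method itself is compact. Below is a minimal numpy sketch of the characteristic-matrix form of the transfer matrix method at normal incidence, vectorized over wavelength via broadcasting. It is a conceptual illustration, not vtmm's API (which also vectorizes over wavevector and is built on TensorFlow):

```python
import numpy as np

# Minimal vectorized transfer-matrix sketch (normal incidence), written with
# numpy instead of TensorFlow; illustrates the method vtmm implements but is
# NOT vtmm's API.
def stack_rt(n_layers, d_layers, wavelengths, n_in=1.0, n_out=1.0):
    """Reflectance/transmittance of a planar stack for many wavelengths at once."""
    k0 = 2 * np.pi / np.asarray(wavelengths)          # free-space wavenumbers
    # Start from the identity characteristic matrix at every wavelength
    m11 = np.ones_like(k0, dtype=complex); m12 = np.zeros_like(m11)
    m21 = np.zeros_like(m11); m22 = np.ones_like(m11)
    for n, d in zip(n_layers, d_layers):
        delta = n * k0 * d                            # phase thickness per layer
        c, s = np.cos(delta), 1j * np.sin(delta)
        a11, a12, a21, a22 = c, s / n, s * n, c       # layer characteristic matrix
        m11, m12, m21, m22 = (m11 * a11 + m12 * a21, m11 * a12 + m12 * a22,
                              m21 * a11 + m22 * a21, m21 * a12 + m22 * a22)
    den = n_in * m11 + n_in * n_out * m12 + m21 + n_out * m22
    r = (n_in * m11 + n_in * n_out * m12 - m21 - n_out * m22) / den
    t = 2 * n_in / den
    return np.abs(r) ** 2, (n_out / n_in) * np.abs(t) ** 2

# Quarter-wave antireflection coating with n = sqrt(n_out): zero reflection
# at the design wavelength 1.0, nonzero away from it
wl = np.linspace(0.8, 1.2, 5)
R, T = stack_rt([np.sqrt(2.25)], [1.0 / (4 * np.sqrt(2.25))], wl, n_out=2.25)
```

Because every wavelength's 2x2 matrix product is carried by the same broadcasted arrays, the whole sweep runs in a handful of vectorized operations, which is what makes the order-of-magnitude speedup over a scalar loop possible.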


GitHub Repository

wavetorch

This Python package provides recurrent neural network (RNN) modules for PyTorch that compute time-domain solutions to the scalar wave equation. This library is the basis for our analog machine learning paper [1].
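The connection to RNNs is direct: after discretizing time with a leapfrog scheme, the wave equation becomes a recurrence in which the next field depends only on the current and previous fields plus the input signal, exactly like an RNN hidden-state update. A toy one-dimensional numpy sketch (not wavetorch's API; the grid, source, and wave-speed values are arbitrary illustrative choices):

```python
import numpy as np

# The scalar wave equation, discretized in time, is a recurrence: the field at
# step t+1 depends only on the fields at steps t and t-1 plus the input signal.
# Toy 1D sketch of this RNN-like structure -- NOT wavetorch's API.
def wave_rnn_step(u_prev, u_now, c, source, dt=1.0, dx=1.0):
    """One leapfrog update: u_{t+1} = 2 u_t - u_{t-1} + (c dt/dx)^2 lap(u_t) + source."""
    lap = np.roll(u_now, 1) + np.roll(u_now, -1) - 2 * u_now
    return 2 * u_now - u_prev + (c * dt / dx) ** 2 * lap + source

N, steps = 200, 150
c = np.full(N, 0.5)                 # wave-speed distribution (the trainable "weights")
u_prev = np.zeros(N); u_now = np.zeros(N)
for t in range(steps):
    src = np.zeros(N)
    src[N // 2] = np.sin(0.2 * t)   # time-varying input injected at one grid point
    u_prev, u_now = u_now, wave_rnn_step(u_prev, u_now, c, src)
```

In this analogy the hidden state is the pair of field snapshots, the input is the injected source waveform, and training the "network" means optimizing the spatial wave-speed distribution c.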

  1. Wave Physics as an Analog Recurrent Neural Network
    Tyler W. Hughes*, Ian A. D. Williamson*, Momchil Minkov, Shanhui Fan
    Science Advances, vol. 5, num. 12, pp. eaay6946

    DOI PDF PDF (supporting info)

Analog machine learning hardware platforms promise to be faster and more energy efficient than their digital counterparts. Wave physics, as found in acoustics and optics, is a natural candidate for building analog processors for time-varying signals. Here, we identify a mapping between the dynamics of wave physics and the computation in recurrent neural networks. This mapping indicates that physical wave systems can be trained to learn complex features in temporal data, using standard training techniques for neural networks. As a demonstration, we show that an inverse-designed inhomogeneous medium can perform vowel classification on raw audio signals as their waveforms scatter and propagate through it, achieving performance comparable to a standard digital implementation of a recurrent neural network. These findings pave the way for a new class of analog machine learning platforms, capable of fast and efficient processing of information in its native domain. Analog machine learning computations are performed passively by propagating light and sound waves through programmed materials.


GitHub Repository

ceviche

This is a package developed primarily by Tyler Hughes based on our code and learnings from angler and fdfdpy. Ceviche is designed around the flexible automatic differentiation (AD) capabilities of the HIPS autograd package. This design choice pays off when optimizing photonic devices, because AD simplifies constructing objective / loss functions and taking gradients of several simulations simultaneously. Having a fully differentiable optical simulation framework also facilitates the integration of inverse design and machine learning models.
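The essence of reverse-mode AD, as provided by autograd, can be sketched in a few dozen lines. The toy scalar implementation below is illustrative only, not the HIPS autograd library: each operation records how to push a gradient back to its inputs, and a single reverse sweep yields the derivative of the loss with respect to every input.

```python
# Minimal reverse-mode automatic differentiation in the spirit of HIPS autograd
# (a toy scalar version, NOT the actual library): every operation records how
# to push the output gradient back to its inputs via the chain rule.
class Value:
    def __init__(self, data, parents=(), push=lambda g: ()):
        self.data, self.grad = data, 0.0
        self._parents, self._push = parents, push

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data + other.data, (self, other), lambda g: (g, g))

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data * other.data, (self, other),
                     lambda g: (g * other.data, g * self.data))

    def backward(self):
        # Topologically order the graph, then one reverse sweep accumulates
        # chain-rule contributions into every node's .grad
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            for p, g in zip(v._parents, v._push(v.grad)):
                p.grad += g

# d/dx (x*x + 3x) at x = 2 is 2x + 3 = 7
x = Value(2.0)
y = x * x + x * 3
y.backward()
```

In ceviche the same mechanism operates over the full FDFD/FDTD computation, so a scalar objective built from the simulated fields is differentiable with respect to every pixel of the permittivity distribution.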

This code is the basis for the results presented in our forward-mode differentiation paper [1].

  1. Forward-Mode Differentiation of Maxwell’s Equations
    Tyler W Hughes, Ian A. D. Williamson, Momchil Minkov, Shanhui Fan
    ACS Photonics, vol. 6, num. 11, pp. 3010–3016

    DOI PDF PDF (supporting info)

We present a previously unexplored ‘forward-mode’ differentiation method for Maxwell’s equations, with applications in the field of sensitivity analysis. This approach yields exact gradients and is similar to the popular adjoint variable method, but provides a significant improvement in both memory and speed scaling for problems involving several output parameters, as we analyze in the context of finite-difference time-domain (FDTD) simulations. Furthermore, it provides an exact alternative to numerical derivative methods, based on finite-difference approximations. To demonstrate the usefulness of the method, we perform sensitivity analysis of two problems. First we compute how the spatial near-field intensity distribution of a scatterer changes with respect to its dielectric constant. Then, we compute how the spectral power and coupling efficiency of a surface grating coupler changes with respect to its fill factor.
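Conceptually, forward-mode differentiation propagates a tangent alongside the primal computation, which is why its cost scales with the number of input parameters rather than the number of outputs. A toy dual-number sketch (illustrative only, not the paper's FDTD implementation):

```python
import math

# Toy dual-number sketch of forward-mode differentiation (conceptual only, NOT
# the paper's FDTD implementation): each quantity carries its value together
# with a tangent d(value)/d(parameter), so the derivative of EVERY output with
# respect to one seeded input parameter falls out of a single forward pass.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    def sin(self):
        return Dual(math.sin(self.val), math.cos(self.val) * self.dot)

# Two outputs of one "simulation", both differentiated w.r.t. eps in one sweep
eps = Dual(1.5, dot=1.0)            # seed the tangent: d(eps)/d(eps) = 1
out1 = eps * eps + eps              # derivative is 2*eps + 1 = 4
out2 = (eps * 2).sin()              # derivative is 2*cos(3)
```

This is the favorable regime the paper identifies: many outputs, few inputs, with no need to store the full time history that reverse-mode (adjoint) differentiation of an FDTD simulation would require.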


GitHub Repository Notebooks Slides

neurophox

Simulation of optical neural networks (ONNs) based on on-chip interferometer meshes and electro-optic nonlinear activation functions, built on TensorFlow and Python. This framework was used for the simulations in our ONN activation function paper [1] and our parallel nullification paper [2].

  1. Reprogrammable Electro-Optic Nonlinear Activation Functions for Optical Neural Networks
    Ian A. D. Williamson, Tyler W. Hughes, Momchil Minkov, Ben Bartlett, Sunil Pai, Shanhui Fan
    IEEE Journal of Selected Topics in Quantum Electronics, vol. 26, num. 1, pp. 1–12

    DOI PDF

We introduce an electro-optic hardware platform for nonlinear activation functions in optical neural networks. The optical-to-optical nonlinearity operates by converting a small portion of the input optical signal into an analog electric signal, which is used to intensity-modulate the original optical signal with no reduction in processing speed. Our scheme allows for complete nonlinear on–off contrast in transmission at relatively low optical power thresholds and eliminates the requirement of having additional optical sources between each of the layers of the network. Moreover, the activation function is reconfigurable via electrical bias, allowing it to be programmed or trained to synthesize a variety of nonlinear responses. Using numerical simulations, we demonstrate that this activation function significantly improves the expressiveness of optical neural networks, allowing them to perform well on two benchmark machine learning tasks: learning a multi-input exclusive-OR (XOR) logic function and classification of images of handwritten numbers from the MNIST dataset. The addition of the nonlinear activation function improves test accuracy on the MNIST task from 85% to 94%.

  2. Parallel Fault-Tolerant Programming of an Arbitrary Feedforward Photonic Network
    Sunil Pai, Ian A. D. Williamson, Tyler W. Hughes, Momchil Minkov, Olav Solgaard, Shanhui Fan, David A. B. Miller
    arXiv:1909.06179 [physics]

    arXiv PDF

Reconfigurable photonic mesh networks of tunable beamsplitter nodes can linearly transform N-dimensional vectors representing input modal amplitudes of light for applications such as energy-efficient machine learning hardware, quantum information processing, and mode demultiplexing. Such photonic meshes are typically programmed and/or calibrated by tuning or characterizing each beam splitter one-by-one, which can be time-consuming and can limit scaling to larger meshes. Here we introduce a graph-topological approach that defines the general class of feedforward networks commonly used in such applications and identifies columns of non-interacting nodes that can be adjusted simultaneously. By virtue of this approach, we can calculate the necessary input vectors to program entire columns of nodes in parallel by simultaneously nullifying the power in one output of each node via optoelectronic feedback onto adjustable phase shifters or couplers. This parallel nullification approach is fault-tolerant to fabrication errors, requiring no prior knowledge or calibration of the node parameters, and can reduce the programming time by a factor of order N to being proportional to the optical depth (or number of node columns in the device). As a demonstration, we simulate our programming protocol on a feedforward optical neural network model trained to classify handwritten digit images from the MNIST dataset with up to 98% validation accuracy.
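The activation mechanism described in [1] above can be caricatured in a few lines. In the toy sketch below, the functional form, parameter names, and values are all illustrative assumptions rather than the paper's exact model: a fraction of the input power is photodetected, and the resulting electrical signal drives an interferometric intensity modulator acting on the remaining light.

```python
import numpy as np

# Toy sketch of the electro-optic activation concept from the paper above.
# Illustrative only: the transfer function, parameter names, and values here
# are ASSUMPTIONS, not the paper's exact model or neurophox's API.
def eo_activation(z, alpha=0.1, gain=np.pi, phase_bias=0.0):
    """Optical-to-optical nonlinearity: tap, detect, and modulate."""
    detected = alpha * np.abs(z) ** 2            # photodetected fraction of power
    phi = gain * detected + phase_bias           # electrical drive phase
    # Remaining light passes through an interferometric intensity modulator
    return np.sqrt(1 - alpha) * z * np.cos(phi / 2)

z = np.linspace(0.0, 2.0, 100)                   # input field amplitudes
out = eo_activation(z)                           # intensity-dependent response
```

Because the modulation depends on the detected power, the output transmission is a nonlinear function of input intensity, while the bias term makes the response electrically reprogrammable.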


GitHub Repository Notebooks

FDFD.jl

This is a pure Julia package for solving Maxwell’s equations with the finite difference frequency domain (FDFD) method, with support for dynamic modulation and eigenmode analysis.
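Language aside, the core of FDFD fits in a few lines. The sketch below uses Python rather than Julia and shows the generic method, not FDFD.jl's API (the grid, frequency, and structure are arbitrary illustrative values): discretize the frequency-domain wave operator on a grid and solve one linear system per frequency.

```python
import numpy as np

# One-dimensional FDFD sketch (generic method, NOT FDFD.jl's API): discretize
# the Helmholtz operator d^2/dx^2 + omega^2 * eps(x) on a grid and solve the
# resulting linear system for the steady-state field of a localized source.
N, dx = 400, 0.05
omega = 2 * np.pi                     # angular frequency (units where c = 1)
eps = np.ones(N)
eps[200:260] = 4.0                    # a dielectric slab embedded in vacuum

# Second-order finite-difference Laplacian with Dirichlet boundaries
main = -2.0 * np.ones(N) / dx**2 + omega**2 * eps
off = np.ones(N - 1) / dx**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

b = np.zeros(N)
b[50] = 1.0                           # localized source term
E = np.linalg.solve(A, b)             # steady-state field at this frequency
```

A practical implementation stores A as a sparse matrix and adds absorbing boundaries (PML); eigenmode analysis, as in FDFD.jl, amounts to solving the eigenproblem of the same discretized operator.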


GitHub Repository

angler

Python-based library for simulating and optimizing linear and nonlinear optical devices, as demonstrated in our paper [1]. The underlying algorithms are the finite difference frequency domain (FDFD) method and adjoint variable method (AVM). The underlying FDFD code is based on fdfdpy.
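The adjoint variable method at the heart of angler can be stated in generic linear-algebra form (the sketch below is conceptual, not angler's API; the system and objective are random illustrative choices): if the simulation is a linear solve A(p) x = b and the objective is J = cᵀx, then one extra solve with Aᵀ gives the gradient of J with respect to every parameter at once.

```python
import numpy as np

# Sketch of the adjoint variable method (AVM) in generic linear-algebra form,
# NOT angler's API: for a simulation A(p) x = b and scalar objective J = c.T x,
# a single adjoint solve A.T lam = c yields dJ/dp_i = -lam.T (dA/dp_i) x for
# ALL parameters p_i simultaneously.
rng = np.random.default_rng(0)
n = 6
A0 = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned base system
b = rng.standard_normal(n)
c = rng.standard_normal(n)

def system(p):
    """A(p): each p_i perturbs the i-th diagonal entry (like a local permittivity)."""
    return A0 + np.diag(p)

p = rng.standard_normal(n)
x = np.linalg.solve(system(p), b)          # forward simulation
lam = np.linalg.solve(system(p).T, c)      # single adjoint simulation
# dA/dp_i is a single 1 at entry (i, i), so the gradient collapses elementwise:
grad = -lam * x

# Finite-difference check of one gradient component
h = 1e-6
dp = np.zeros(n); dp[2] = h
J = lambda q: c @ np.linalg.solve(system(q), b)
fd = (J(p + dp) - J(p)) / h
```

This is why two full-field simulations suffice regardless of the number of design degrees of freedom; the nonlinear extension in [1] below includes the nonlinear response directly in the same gradient computation.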

  1. Adjoint Method and Inverse Design for Nonlinear Nanophotonic Devices
    Tyler W. Hughes*, Momchil Minkov*, Ian A. D. Williamson, Shanhui Fan
    ACS Photonics, vol. 5, num. 12, pp. 4781–4787

    DOI PDF PDF (supporting info)

    The development of inverse design, where computational optimization techniques are used to design devices based on certain specifications, has led to the discovery of many compact, nonintuitive structures with superior performance. Among various methods, large-scale, gradient-based optimization techniques have been one of the most important ways to design a structure containing a vast number of degrees of freedom. These techniques are made possible by the adjoint method, in which the gradient of an objective function with respect to all design degrees of freedom can be computed using only two full-field simulations. However, this approach has so far mostly been applied to linear photonic devices. Here, we present an extension of this method to modeling nonlinear devices in the frequency domain, with the nonlinear response directly included in the gradient computation. As illustrations, we use the method to devise compact photonic switches in a Kerr nonlinear material, in which low-power and high-power pulses are routed in different directions. Our technique may lead to the development of novel compact nonlinear photonic devices.


GitHub Repository

Copyright Ian Williamson.