Is Hashing All You Need For Implicit Neural Representations?



Ever since the original NeRF paper came out in 2020, implicit neural representations (INRs) have become one of the most active research areas in deep learning. It could be argued that if LLMs are the golden child of ML in popular science, then INRs are their twin within academia, especially among computer vision researchers. While the approach began as a methodology for 3D reconstruction from 2D images, it has since evolved into a new paradigm for representing all modalities of data in a continuous manner.

In 2022, Instant-NGP took the field of INRs by storm, achieving orders-of-magnitude speedups in both training and inference while matching or exceeding prior reconstruction quality. Much of this superior performance can be credited to its multi-resolution hash embeddings. However, why this method is so powerful on INR tasks remains largely unexplored. Questions such as the following are crucial for the field to progress in the right direction: What exactly does the hashing do that makes learning so fast and accurate? Why is a 2-layer ReLU network sufficient for learning such complex data (e.g. 2D images, signed distance functions, 3D volumes)?
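To make the object of study concrete, here is a minimal NumPy sketch of a 2D multi-resolution hash encoding in the spirit of Instant-NGP: each resolution level hashes the corners of the enclosing grid cell into a small feature table and bilinearly interpolates, and the per-level features are concatenated before being fed to the small MLP. The hyperparameters (number of levels, table size, feature width) and function names here are illustrative choices, not the paper's defaults.

```python
import numpy as np

# Large primes for spatial hashing, as in typical hash-grid implementations.
PRIMES = np.array([1, 2654435761], dtype=np.uint64)

def hash_coords(coords, table_size):
    """Hash integer grid coordinates of shape (N, 2) into [0, table_size)."""
    c = coords.astype(np.uint64)
    h = (c[:, 0] * PRIMES[0]) ^ (c[:, 1] * PRIMES[1])  # wraps mod 2**64
    return (h % np.uint64(table_size)).astype(np.int64)

def encode(xy, tables, base_res=16, growth=2.0):
    """Encode points xy in [0,1]^2; one feature table per resolution level."""
    feats = []
    for level, table in enumerate(tables):
        res = int(base_res * growth ** level)
        pos = xy * res
        lo = np.floor(pos).astype(np.int64)
        frac = pos - lo
        out = 0.0
        # Bilinear interpolation over the four corners of the grid cell.
        for dx in (0, 1):
            for dy in (0, 1):
                idx = hash_coords(lo + np.array([dx, dy]), len(table))
                w = ((frac[:, 0] if dx else 1 - frac[:, 0])
                     * (frac[:, 1] if dy else 1 - frac[:, 1]))
                out = out + w[:, None] * table[idx]
        feats.append(out)
    return np.concatenate(feats, axis=1)

rng = np.random.default_rng(0)
L, T, F = 4, 2 ** 12, 2          # levels, table size, features per level
tables = [rng.normal(0, 1e-2, (T, F)) for _ in range(L)]
emb = encode(rng.random((8, 2)), tables)
print(emb.shape)  # (8, 8): L * F features per point
```

In the full method these tables are trainable parameters optimized jointly with the MLP; the sketch uses fixed random tables only to show the indexing and interpolation mechanics.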

Based on preliminary experiments and discussions with researchers familiar with the field, it can be reasonably conjectured that hashing is a fundamentally superior encoding for INRs. This project aims to establish this conjecture rigorously, with a focus on applications to 2D images.

We will be optimistically aiming for ICML 2024 and NeurIPS 2024. Unofficially supervised by Prof. N. Wong from HKU and Prof. David Lindell from UofT.

We’re looking for 2-3 experienced researchers:

  • Fluent in Python and its relevant frameworks and packages, such as PyTorch, NumPy, and Matplotlib
  • Have a fundamental understanding of neural networks
  • Have the ability to make informed hypotheses and design experiments to validate them
  • Have a genuine interest in understanding the inner workings of a neural network, either empirically through experiments or theoretically through mathematics
  • Having an understanding of the theoretical side of neural networks, such as spectral bias and neural tangent kernel (NTK) theory, is preferred
  • Having prior experience in leading or co-leading academic research is preferred
  • Having prior experience in NeRF or INR-related field is preferred

The Team

Steven Luo
Tanmay Bishnoi
Freeman Cheng
Alston Lo
Andrew Magnuson
Howard Xiao
Junrui (Arielle) Zhang