Featured Science

Artificial Intelligence: The Neural Network Approach

The whole world is going gaga over Artificial Intelligence (AI). There are discussions about the impact it will have on our lives in the years to come, its merits and demerits, how it will affect creative activity or lead to job losses, and much more. But how many of us are aware of its origins?

AI’s roots trace back to the ideas of Alan Turing, the father of modern computing. Turing conceptualized a hypothetical device, now called the “Turing Machine,” which could carry out logical operations and solve problems. This concept laid the foundation for what we call machine intelligence. Interestingly, the problems Turing proved his machine could never solve remain unsolvable on even today’s most advanced computers, and this holds for quantum computers too, since they compute the same class of problems, only sometimes faster!

At its heart, AI seeks to replicate how the human brain learns, stores, and retrieves information. Early efforts in this direction began in the 1940s but faced limitations due to the incomplete understanding of brain functions at the time. A breakthrough came in 1949, when the Canadian psychologist Donald Hebb proposed the idea of synaptic plasticity: that learning alters the effectiveness of synapses, the connections between brain cells (neurons).
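Hebb’s idea, often summarized as “neurons that fire together wire together,” is commonly formalized in textbooks as follows (the symbols here are the standard ones, not taken from the article):

```latex
% Hebbian weight update: the synapse between neurons i and j
% strengthens in proportion to their joint activity.
\Delta w_{ij} = \eta \, x_i \, x_j
% w_{ij}: synaptic strength; \eta: a small learning rate;
% x_i, x_j: the activities of neurons i and j.
```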

The real game-changer came in 1982, when the physicist John Hopfield developed a model of neural networks that could act as an associative memory. Hopfield’s work bridged biology, physics, and computer science, setting the stage for rapid advances in AI.

Hopfield’s model mimicked the brain’s memory mechanisms, drawing inspiration from an unlikely source: spin-glass systems in physics. A spin glass is a special type of magnetic material in which the atomic spins (tiny magnets) are randomly arranged, giving rise to competing interactions between them. Such a material can settle into a huge number of distinct stable configurations, much as the brain can store many different memories.
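The analogy can be made concrete with the standard energy function shared by spin glasses and the Hopfield model (a textbook form, not quoted from Hopfield’s paper):

```latex
% Energy of a configuration of binary spins s_i = ±1 coupled by strengths w_ij.
E = -\tfrac{1}{2} \sum_{i \neq j} w_{ij} \, s_i \, s_j
% Stored memories correspond to stable configurations (local minima) of E;
% the network's update dynamics only ever lower E, so recalling a memory
% amounts to descending into the nearest minimum.
```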

Hebb’s learning rule, inspired by synaptic plasticity, was mathematically integrated into Hopfield’s model. Each memory is thereby distributed across the whole network rather than stored in one place, which makes the memories robust to damage and noise, and allows a complete memory to be recalled from a partial or corrupted cue, much as a scent can summon an entire scene.
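The storage-and-recall scheme described above can be sketched in a few lines of Python with NumPy. This is a minimal illustration of a standard Hopfield network, not the author’s exact model; the function names (`train_hebbian`, `recall`) and the network size are purely illustrative.

```python
import numpy as np

# Neurons are binary spins s_i = ±1, as in the spin-glass analogy.

def train_hebbian(patterns):
    """Build the weight matrix with Hebb's rule: w_ij = (1/N) sum_p x_i^p x_j^p."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0)  # no self-connections
    return w

def recall(w, state, sweeps=10):
    """Asynchronous updates: each neuron flips to align with its local field."""
    state = state.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(state)):
            local_field = w[i] @ state
            state[i] = 1 if local_field >= 0 else -1
    return state

rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=16)   # one "memory" of 16 spins
w = train_hebbian(pattern[None, :])

# Corrupt a few bits of the memory and let the network restore it.
noisy = pattern.copy()
noisy[:3] *= -1
restored = recall(w, noisy)
print(np.array_equal(restored, pattern))  # prints True
```

The key point the sketch demonstrates is associative recall: the network is handed only a damaged cue, yet the dynamics pull the state back to the stored pattern, because that pattern sits at a minimum of the network’s energy.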

Notwithstanding the success of the Hopfield model with Hebb’s rule and its recall prescription, there is still tremendous scope for improvement, and efforts continue to refine this model or construct new ones that mimic cognition as closely as possible. These include work by this author’s group together with his Cambridge collaborators: Sir Sam Edwards, an eminent physicist from the Cavendish Laboratory, and the neuroscientist David Parker. Thus the origins of AI are intricately linked to spin-glass physics, and Sir Sam can be regarded as one of the pioneers of Artificial Intelligence.

Prof. Vipin Srivastava
Former Professor of Physics, University of Hyderabad