Soft Power Squared: GPT and the explosive potential of A.I. parallelism


The lightning-fast emergence and evolution of OpenAI’s GPT platform, and of large language models (LLMs) in general, require a reassessment of the innovation landscape and of the economy itself. Exciting developments from giant tech companies and tinkering novices alike land not month by month but minute by minute.

Each new era of digital advance marries a new abundance of hardware resources with an explosion of software creativity. Today, the Transformer approach to A.I., introduced in 2017, leverages the new silicon abundance: massive parallelism, embodied in graphics processing units (GPUs), themselves arrayed in massively parallel fashion in data centers. The result is a new knowledge platform, soon to unleash multidimensional cascades of software, content, and digital capability that will diffuse into every tool and industry.
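To make that parallelism concrete, consider a minimal sketch (illustrative only, not OpenAI’s code) of the Transformer’s core operation, scaled dot-product attention. It reduces to a few large matrix multiplications, exactly the workload GPUs execute in parallel, and it updates every token in a sequence at once rather than one step at a time, as earlier recurrent networks had to do.

# Minimal sketch: why Transformers map so well onto GPUs.
# Scaled dot-product attention is a handful of large matrix multiplications,
# so every token attends to every other token in a single parallel step.
# Sizes below are toy values chosen for illustration.
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention over a whole sequence at once."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # (tokens x tokens) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over each row
    return weights @ V                                 # blend value vectors per token

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 tokens, 8-dimensional embeddings
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(attention(Q, K, V).shape)  # (4, 8): all tokens updated in one parallel pass

Because the whole computation is dense linear algebra with no step-by-step dependency between tokens, adding more GPU cores, or more GPUs, directly buys more speed and scale.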

In 2016, DeepMind’s AlphaGo blew our minds by winning, with panache, an ancient board game far more complicated than chess. AlphaGo signaled a new era in A.I. We noted at the time, however, that it was still playing on a field constrained in multiple dimensions. It also consumed 50,000 times more power than the human brain. An impressive feat, yes, but voracious and narrow.

ChatGPT’s emergence this winter, however, captured the world’s attention because of its seeming ability to deal with a wider range of tasks across a much broader, more ambiguous, more human field of play.