Category: neural networks

What is a liquid neural network, really?

The initial research papers date back to 2018, but for most people, the notion of liquid networks (or liquid neural networks) is a new one. It was “Liquid Time-constant Networks,” published at the tail end of 2020, that put the work on other researchers’ radar. Since then, the paper’s authors have presented the work to a wider audience through a series of lectures; Ramin Hasani’s TEDx talk at MIT is one of the best examples. Hasani is the Principal AI and Machine Learning Scientist at the Vanguard Group and a Research Affiliate at MIT CSAIL, and served as the

Read More »

A jargon-free explanation of how AI large language models work

(credit: Aurich Lawson / Ars Technica) When ChatGPT was introduced last fall, it sent shockwaves through the technology industry and the larger world. Machine learning researchers had been experimenting with large language models (LLMs) for a few years by that point, but the general public had not been paying close attention and didn’t realize how powerful they had become. Today, almost everyone has heard about LLMs, and tens of millions of people have tried them out. But not many people understand how they work. If you know anything about this subject, you’ve probably heard that LLMs are trained

Read More »