AMD predicts future AI PCs will run 30B parameter models at 100 tokens per second

They're gonna need a heck of a lot of memory bandwidth – not to mention capacity – to do it

Analysis  Within a few years, AMD expects to have notebook chips capable of running 30-billion-parameter large language models locally at a speedy 100 tokens per second…
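To see why the sub-heading flags memory bandwidth, consider a rough back-of-envelope sketch. Assuming token generation is memory-bandwidth bound, so that producing each token requires streaming roughly all of the model's weights from memory once, the bandwidth needed scales with parameter count, bytes per parameter, and token rate. The precisions below are illustrative assumptions, not figures from AMD:

```python
# Back-of-envelope estimate (illustrative assumptions, not AMD's numbers).
# Assumes decode is memory-bandwidth bound: generating one token requires
# streaming all model weights from memory roughly once.

PARAMS = 30e9          # 30B-parameter model, per the article
TOKENS_PER_SEC = 100   # target throughput, per the article

# Hypothetical weight precisions, in bytes per parameter
precisions = {"FP16": 2.0, "INT8": 1.0, "INT4": 0.5}

for name, bytes_per_param in precisions.items():
    weight_bytes = PARAMS * bytes_per_param           # memory capacity for weights
    bandwidth = weight_bytes * TOKENS_PER_SEC         # bytes read per second
    print(f"{name}: ~{weight_bytes / 1e9:.0f} GB of weights, "
          f"~{bandwidth / 1e12:.1f} TB/s of memory bandwidth")
```

Under these assumptions, even an aggressively quantized 4-bit model needs around 15 GB of memory for weights alone and on the order of 1.5 TB/s of bandwidth to hit 100 tokens per second, which is well beyond what today's notebook memory subsystems deliver.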
