Sarvam unveils two new large language models focused on real-time use, advanced reasoning
The company said the models are optimised for “efficient thinking”, delivering stronger responses while using fewer tokens — a key factor in reducing inference costs in production environments.