Announcing the alpha release of torchtune! torchtune is a PyTorch-native library for fine-tuning LLMs. It combines hackable memory-efficient fine-tuning recipes with integrations into your favorite tools. Get started fine-tuning today! Details: hubs.la/Q02t214F0
@PyTorch Congrats PyTorch team on the launch, but I think we could already fine-tune LLMs pretty easily with libraries like LitGPT and Axolotl. Genuinely asking: what makes torchtune different from the existing libraries?
@PyTorch Super grateful to have been part of such an incredible project 👏👏👏
@PyTorch PyTorch-native LLM fine-tuning with torchtune is a step forward. While memory efficiency is great, integration details & long-term support are key for wider adoption.
@PyTorch are there plans to add more adapter types, like IA3, LoHa and LoKr?
@PyTorch Amazing, nothing could be better than having a built-in library with no third-party dependencies.
@PyTorch Already loving the docs and that they are also educational. Thanks!
@PyTorch @DataSciNews Exciting news on the alpha release of torchtune! Can't wait to fine-tune LLMs with hackable memory-efficient recipes, integrating seamlessly into favorite tools. Let's dive in!