We just released the Pixtral 12B paper on arXiv: arxiv.org/abs/2410.07073
@dchaplot Congrats! Pixtral 12B is also integrated into @CamelAIOrg for multi-agent systems: github.com/camel-ai/camel… 🚀
@dchaplot Have you seen a good recipe for running Pixtral on macOS yet?
@dchaplot flexible token processing is key. excited to see how this scales
@dchaplot Really appreciate the acknowledgement! It was a great collaboration to have this model supported on vLLM on day 1!
@dchaplot @LeeLeepenkman Love the Word Art 1997 typography! Nostalgic!!
@dchaplot Super exciting stuff. Multi-agent systems unlock so many possibilities.
@dchaplot Dark mode for this paper for those who read at night 🌚 synthical.com/abs/2410.07073…
@dchaplot @yacineMTB I was playing with the model last weekend and it’s awesome. 👌
@dchaplot AI Summary: Pixtral 12B is a 12-billion-parameter multimodal language model designed to process both natural images and text, achieving superior performance on various benchmarks compared to larger models. I... goatstack.ai/articles/2410.…