@bhrugsy Install Ollama and run these models:
- DeepSeek-R1 70B Q4
- DeepSeek-R1 32B Qwen Distill Q8
Test these 2 for coding.
- Llama 70B - for conversations
Connect Ollama with VS Code (there are tons of tutorials online). Congrats, you now have a 100% local Cursor. Have fun :)
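[For anyone curious how the pieces talk to each other: a VS Code extension or any script just calls the Ollama server running on your machine. Below is a minimal sketch using Ollama's local HTTP chat endpoint; the exact model tags ("deepseek-r1:32b", "llama3.3:70b") are examples - check `ollama list` for what you've actually pulled.]

```python
# Minimal sketch: talk to a locally running Ollama server from Python.
# Assumes Ollama is installed and models have been pulled beforehand
# (e.g. via `ollama pull deepseek-r1:32b`). Model tags here are assumptions.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint


def ask(model: str, prompt: str) -> str:
    """Send one chat message to the local Ollama server and return the reply text."""
    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,  # return one complete JSON object instead of a stream
        },
        timeout=600,  # big models can take a while to load on first request
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]


if __name__ == "__main__":
    # Coding question to the DeepSeek distill, casual chat to Llama (tags are examples).
    print(ask("deepseek-r1:32b", "Write a Python function that reverses a linked list."))
    print(ask("llama3.3:70b", "Explain in two sentences what Cursor's Composer does."))
```

Nothing leaves your machine: the request goes to localhost, which is the whole point of the "100% local Cursor" setup.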
@solocodeventure Very cool, thanks man! Connect it to LM Studio and chat all you want.
@solocodeventure @bhrugsy Would this local Cursor be better than the default one with Sonnet? E.g., is Composer faster, or does it write better code?
@solocodeventure @bhrugsy I have the same setup, but there's always some other process consuming 20GB of memory on macOS. I don't know how to free most of it up.
@solocodeventure @bhrugsy Quick question: can I run this on an M1 Pro with 32GB RAM and a 1TB drive?
@solocodeventure @bhrugsy Swap VS Code for Windsurf.
@solocodeventure @bhrugsy What about a MacBook Air M1?
@solocodeventure @bhrugsy I think 70B and 32B are too big - won't the MacBook get very hot and damage the battery?
@solocodeventure @bhrugsy I run the 70B on my M2 Ultra with 128GB and it's great.
@solocodeventure @bhrugsy Is the tokens/s throughput viable for this?
@solocodeventure @bhrugsy Quick question: where does all the data go when we use the DeepSeek app?
@solocodeventure @bhrugsy Never thought about hooking Ollama up to VS Code. Cool, thanks!
@solocodeventure @bhrugsy But the Mac starts sounding like a spaceship engine, very noisy.
@solocodeventure @bhrugsy How do we get Sonnet 3.5 or 3.7 locally as well, acting as a local Cursor AI with its agent feature?