Here's a silly idea I had back when ChatGPT came out (remember that moment?). What if we could generate new ideas by transferring patterns from one knowledge domain to another?

ChatGPT made a lot of people believe there's "truth" buried in text. Can we extract genuinely new knowledge from LLMs? I'm skeptical. Without agency, planning, and real-world feedback, it seems like a long shot. But language turned out to hold more patterns than we thought, and text bots have already blown our minds once. Maybe we can use humanity's compressed textual knowledge to find gaps and fill them.

We use metaphors at work all the time, carrying properties over from one thing to another. Sometimes it actually works! Transferring knowledge between domains seems much more doable in text than generating verified new knowledge from scratch.

Not all human knowledge is equally developed. Some towers of knowledge are tall and polished; others lag behind. A breakthrough can pull everyone's attention in a new direction and leave whole fields abandoned. Sometimes we were close to a result when a field got shelved for decades before a revival. And that's just the 20th and 21st centuries; in ancient times, things could stay forgotten for centuries or millennia.

So we could ask an LLM-dreamer to hunt for knowledge gaps that might be fillable from neighboring domains: "apply fluid dynamics to neural nets." Sure, we'd mostly get interesting nonsense. Then an LLM-critic could filter out the obvious junk. We could have LLMs argue a topic among themselves, with cascades of bots acting as agents holding different views. The goal: a top-100 list of attractive ideas.

Again, there's no real-world feedback in any of this. Unless, of course, it's hooked up to an automated chemistry lab searching for new materials.

A site that ranks these imagined gap-filling ideas would be cool. With a paper archive and some tuning, it could get genuinely interesting.
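For fun, here's a minimal sketch of how that dreamer/critic loop might be wired up. Everything in it is my own assumption: the `llm` callable stands in for whatever chat API you'd actually use, and the domain list, prompts, and 0-10 scoring scheme are invented purely for illustration.

```python
import random
from typing import Callable

# Hypothetical sketch of the dreamer/critic cascade described above.
# `llm` is a placeholder for any prompt-in, text-out completion call;
# the domains, prompts, and scoring are all made-up illustrations.

DOMAINS = ["fluid dynamics", "neural networks", "epidemiology",
           "metallurgy", "population genetics", "queueing theory"]

def dream(llm: Callable[[str], str], n: int = 50) -> list[str]:
    """LLM-dreamer: propose cross-domain transfers, most of them nonsense."""
    ideas = []
    for _ in range(n):
        src, dst = random.sample(DOMAINS, 2)
        prompt = (f"Name one well-developed concept from {src} and propose, "
                  f"in two sentences, how it could fill a gap in {dst}.")
        ideas.append(llm(prompt))
    return ideas

def criticize(llm: Callable[[str], str], idea: str) -> float:
    """LLM-critic: filter the obvious junk by scoring each idea 0-10."""
    prompt = (f"Rate this cross-domain research idea from 0 (incoherent) to "
              f"10 (worth a grant proposal). Reply with a number only.\n\n{idea}")
    try:
        return float(llm(prompt).strip())
    except ValueError:
        return 0.0  # an unparseable verdict counts as junk

def top_ideas(llm: Callable[[str], str], k: int = 100) -> list[tuple[float, str]]:
    """Dream, criticize, and keep the top-k list of attractive ideas."""
    scored = [(criticize(llm, idea), idea) for idea in dream(llm, n=5 * k)]
    return sorted(scored, reverse=True)[:k]
```

The multi-agent debate step would slot in between `dream` and `criticize`, and the automated chem lab is what would eventually replace `criticize` with actual real-world feedback.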