This is such a funny debate. Because of random sampling, EVERYTHING that could ever exist can be generated with non-zero probability. Any output can be reached with the right RNG. To say AI can't invent anything is to say nothing could ever be invented at all.
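A toy sketch of the point, assuming a made-up 10-token vocabulary with random logits standing in for a real model: under plain full-softmax sampling, every token keeps strictly positive probability, so any finite output sequence is reachable in principle.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

# Toy "model": random logits over a 10-token vocabulary (an assumption,
# not a real LM). Softmax gives every token a non-zero probability.
logits = rng.normal(size=10)
probs = softmax(logits)

assert (probs > 0).all()  # no token is unreachable
token = rng.choice(len(probs), p=probs)
```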
Maybe the better debate is, "is AI likely to invent?" ethansmith2000.com/post/to-create…
@torchcompiled some of y’all never unboxed a knife in csgo and it shows
@torchcompiled but to sample what makes sense, it's limited to our i/o. and i think the debate should be clearer: how can you create novelty if you're using the same ai (as we know it today) as everyone else?
@torchcompiled I'm broadly on your side but don't think this is a great argument. "random.random() can't invent anything" is true for practical purposes.
@torchcompiled Taste > Search > Probabilistic Recombination. AI will invent; the question is whether the algorithm to invent is cheap enough to make the economics work out. Taste matters because of time / cost / resource optimisation, so the real question is: will AI get to be GOOD AT Taste?
@torchcompiled you’ve got a point - but the dice aren’t just rolling, they remember the tilt
I'm a strong believer that nothing is 'invented' out of whole cloth. Einstein was looking at a certain question with a certain set of knowledge, and in that context the answers were obvious to him. It's less a 'creative' process than asking yourself the right questions at the right time. There's no reason that an LLM, when presented with the right question and the right source data, couldn't 'randomly' come up with a novel invention.
@torchcompiled ai cannot draw from subjective conscious experience to create
@torchcompiled This is not technically true because of top-k and top-p sampling
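Concretely, here's a minimal nucleus (top-p) sketch with made-up probabilities, not any particular library's implementation: everything outside the nucleus gets exactly zero probability, so those outputs become unreachable.

```python
import numpy as np

def top_p_filter(probs, p=0.9):
    # Keep the smallest prefix of tokens (by descending probability)
    # whose cumulative mass reaches p; zero out the rest; renormalize.
    order = np.argsort(probs)[::-1]
    cutoff = np.searchsorted(np.cumsum(probs[order]), p) + 1
    kept = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[kept] = probs[kept]
    return filtered / filtered.sum()

probs = np.array([0.5, 0.3, 0.15, 0.04, 0.01])
print(top_p_filter(probs))
# [0.526 0.316 0.158 0.    0.   ] - the tail is exactly zero,
# so "everything has non-zero probability" no longer holds.
```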
@torchcompiled Is this true? For example, if you train a diffusion model from scratch on ONLY black-and-white images, will it be able to generate color?
@torchcompiled Not necessarily. It could be that the current paradigm won't invent new things: go full-on RNG and the probability of anything useful is essentially 0 (typewriter monkeys are useless in practice), or keep useful probabilities and they constrain it too tightly to existing knowledge. Who knows..
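Back-of-envelope on the typewriter-monkeys point, with assumed sizes (a ~50k-token vocabulary and one specific 1,000-token target sequence, sampled uniformly at random):

```python
import math

vocab_size = 50_000  # assumed vocabulary size
seq_len = 1_000      # assumed target-sequence length

# log10 of the probability of hitting one specific sequence
log10_prob = -seq_len * math.log10(vocab_size)
print(log10_prob)  # ~ -4699, i.e. probability ~ 10**-4699 per try
```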
@torchcompiled yeah - at the same time, the combinatorial space of token sequences the model can produce is limited by what's in the training data, so for the model to produce something novel you really have to encourage it