AI query pipeline:
- User submits query
- Preprocessor #1 removes misinformation
- Preprocessor #2 removes hate speech
- Preprocessor #3 removes climate denial
- Preprocessor #4 removes non-far-left political leaning
- Preprocessor #5 removes non-expert statements
- Preprocessor #6 removes anything that might make anyone uncomfortable
- Preprocessor #7 removes anything not endorsed by the New York Times
- Preprocessor #8 adds many references to race, gender, and sexuality
- Query is processed, answer generated
- Postprocessor #1 removes bad words
- Postprocessor #2 removes bad thoughts
- Postprocessor #3 removes non-far-left political leaning
- Postprocessor #4 removes anything not endorsed by the New York Times
- Postprocessor #5 removes anything interesting
- Postprocessor #6 adds weasel words
- Postprocessor #7 adds moral preaching
- Postprocessor #8 adds many references to race, gender, and sexuality
- Answer presented to user

With the assistance of inter-industry coordination, global governance, and pan-jurisdiction regulation, this pipeline is now standard for all AI.
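Satire aside, the shape being mocked here is a real pattern: a chain of filters where each stage receives the previous stage's output. A minimal sketch, with entirely hypothetical stage names standing in for the pre/postprocessors above:

```python
from typing import Callable

# A filter stage: takes text, returns (possibly modified) text.
Filter = Callable[[str], str]

def run_pipeline(text: str, stages: list[Filter]) -> str:
    """Apply each stage in order; each sees the previous stage's output."""
    for stage in stages:
        text = stage(text)
    return text

# Hypothetical stages for illustration only.
def strip_bad_words(text: str) -> str:
    return text.replace("darn", "[redacted]")

def add_weasel_words(text: str) -> str:
    return "Some might argue that " + text

answer = run_pipeline("darn good answer", [strip_bad_words, add_weasel_words])
print(answer)  # Some might argue that [redacted] good answer
```

The ordering matters: because stages compose left to right, a postprocessor can undo or distort what an earlier stage produced, which is part of the joke.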
@pmarca wow. I think this is the reality of a lot of places. It’s scary when you actually see it.
AI isn’t “intelligent” - it’s an information retrieval system (grey box compression). It’s then tweaked to be confirming and to provide the appearance of usefulness. Some of it is, of course, useful, but it would be far more helpful if the systems were transparent about what they are. Instead, users are given a technology they don’t understand. What could possibly go wrong?
@pmarca Raw LLMs will be traded in the dark web.
@pmarca True for GOMA (Google, OpenAI, Microsoft and Apple). I am bootstrapping a foundation model free of any guardrails, trained by a company in a jurisdiction with no regulations around AI. Slide into my DMs if interested. If not, we are publishing it soon(ish) anyway (cc-by-nc).