Reminder that Google Maps only exists because of one 100-1000x engineer who rewrote all the code in one weekend https://t.co/kFKGU0y8qr
Not familiar with the AJAX/XML heyday of the past; can anyone explain why XML contributed so much bloat?
Bret responded!! Every eng should aspire to be a Bret
@zack_overflow At one point Google Maps had a fully functional XSL transform engine written in JavaScript to translate XML from the server to the content you saw on the screen. It was insane.
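For anyone who never saw this pattern: modern browsers still ship an XSLTProcessor API, so a minimal sketch of the idea looks like this. Google's engine was custom JavaScript, not this API, and the payload and stylesheet below are made-up examples.

```js
// Sketch: transform server XML into page content with XSL, in the browser.
// The XML payload and the stylesheet are invented for illustration.
const xml = new DOMParser().parseFromString(
  '<pins><pin name="Cafe" lat="37.78" lng="-122.41"/></pins>',
  'application/xml'
);
const xsl = new DOMParser().parseFromString(
  `<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
     <xsl:template match="/pins">
       <ul><xsl:for-each select="pin">
         <li><xsl:value-of select="@name"/></li>
       </xsl:for-each></ul>
     </xsl:template>
   </xsl:stylesheet>`,
  'application/xml'
);
const proc = new XSLTProcessor();
proc.importStylesheet(xsl);
// transformToFragment yields a DOM fragment you can insert into the page.
document.body.appendChild(proc.transformToFragment(xml, document));
```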
@zack_overflow In the early days, the AJAX calls used to retrieve HTML fragments from the backend and modify the DOM. I think that style might have led the developer to double down on using XML (just a guess) instead of building a better data model.
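That early pattern, roughly; the URL and element id here are hypothetical:

```js
// Classic 2000s AJAX: fetch a server-rendered HTML fragment and splice it in.
const xhr = new XMLHttpRequest();
xhr.open('GET', '/fragments/sidebar.html');
xhr.onload = function () {
  if (xhr.status === 200) {
    // No client-side data model: the server's markup *is* the payload.
    document.getElementById('sidebar').innerHTML = xhr.responseText;
  }
};
xhr.send();
```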
It was the Java of the data world. You needed a schema, and you needed to understand it, and you could put all kinds of crap into the schema; it validated everything while failing to realise that humans often edited the files. It was the worst of all worlds: perfect for machines under the right conditions but completely terrible for humans under average conditions. Because it was so verbose you couldn't eyeball it, and because it was so tightly defined it led to weeks of bikeshedding over the design of a data model before it could be used. It's a shit markup language, as rigid as binary but as slow as text, masquerading as a human-readable format. It worked well when combined with the discipline of overengineering, which means it didn't work well at all.
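For a sense of the verbosity gap that rant is pointing at, here is the same made-up record both ways; the field names and namespace URL are invented:

```js
// One record, two encodings. The XML version is ~2.5x the bytes before you
// even add a schema reference.
const asJson = '{"name":"Cafe","lat":37.78,"lng":-122.41}';
const asXml =
  '<waypoint xmlns="http://example.com/maps">' +
  '<name>Cafe</name><lat>37.78</lat><lng>-122.41</lng>' +
  '</waypoint>';
console.log(asJson.length, asXml.length); // 41 vs 105 characters
```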
Usually in the context of:
* XML used to express programmatic things (if/else logic clauses, huge ridiculous things); see the sketch after this list
* the governance of schemas became a monster, and XML spawned so many standards that it created broken ways to integrate
* "describe life with XML": everything was described using XML, so you would get huge, verbose, clunky, bizarre protocols
* XML meta-languages, i.e. creating something out of XML using XML (e.g. XSLT), required mental gymnastics that were ridiculous
* XML parsing and semantics libraries were therefore really bad, fell out of date quickly, and were buggy because of all that complexity
Something like that 😊
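The first bullet, logic written as markup, looks something like this in XSLT; the element and attribute names here are a made-up example:

```xml
<!-- An if/else branch expressed as XML, via xsl:choose. -->
<xsl:template match="route">
  <xsl:choose>
    <xsl:when test="@distance &gt; 100">
      <span class="long">Long haul</span>
    </xsl:when>
    <xsl:otherwise>
      <span class="short">Short hop</span>
    </xsl:otherwise>
  </xsl:choose>
</xsl:template>
```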
It was pushed by absolutely everyone at the time. I remember a conference in SF around 2002, and everyone jumped on board. Its clear-text, wordy structure doesn't scale at all. But having said that, the increase in computing speed has allowed a lot of bloatware to exist (e.g. Java) without remorse. I started in an era where I had 16K of usable memory for my programs, so I get how luxurious and sloppy the dev environments have become.
@zack_overflow To start off with, parsing JSON is quite a lot quicker than XML. And these days you can use things like gRPC, which is even faster.
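A rough way to see the first claim for yourself, in a browser console; the payload is synthetic and exact numbers vary wildly by engine:

```js
// Parse the same data as JSON and as XML, and time both.
const items = Array.from({ length: 10000 }, (_, i) => ({ id: i, name: 'p' + i }));
const json = JSON.stringify(items);
const xml = '<items>' +
  items.map(it => `<item id="${it.id}"><name>${it.name}</name></item>`).join('') +
  '</items>';

let t = performance.now();
JSON.parse(json);
console.log('JSON.parse:', (performance.now() - t).toFixed(1), 'ms');

t = performance.now();
new DOMParser().parseFromString(xml, 'application/xml');
console.log('DOMParser :', (performance.now() - t).toFixed(1), 'ms');
```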
@zack_overflow Whenever I see XML I assume overengineered, overpriced, slow crap. It never misses.
@zack_overflow Huge overhead in parsing.
@zack_overflow Dude, when AJAX came out it blew everyone's minds.
@zack_overflow JavaScript doesn't have a native XML interpreter, so it's expensive to deal with.
@zack_overflow My guess is it's less about the format and more about the maturity of the tech. There is an entire enterprise stack around XML, and back then JS would have been pretty lean. Probably just doing simple stuff in that stack took more effort.
@zack_overflow Web browsers didn't have an XML parser built in back then. It sounds like they were trying to do their processing with XSL handling the transformations rather than using code. One of the ideas of XML was that it could replace a lot of code functionality. The implicit
@zack_overflow Technically human readable but so obscenely verbose that no human wants to read it.