Like, I remember working with DocBook XML[0] and it was fine. And the idea of being able to use different namespaces in a document (think MathML and SVG in XHTML) was neat too.
The problems arose from the fact that it was adopted for everything, even where it largely didn't make much sense. So people came to hate it because e.g. "a functional language to transform XML into other formats" is neat, but "a functional language written in XML tags" is a terrible idea[1].
Likewise, "define a configuration in XML" seems a good idea, but "a build system based on XML plus interpolation you're supposed to edit by hand" is not great[2].
So people threw away all of the baby XML with the bathwater, only to keep reinventing the same things over and over, e.g. SOAP+WSDL became a hodgepodge of badly documented REST APIs, swagger yaml definitions and json schemas, plus the actual ad-hoc encoding.
And I mean, it's not like SOAP+WSDL actually worked well either, it was always unreliable. And even the "mix up namespaces" idea didn't work out, because clients never really parsed more than one thing at a time, so it was pointless (with notable small exceptions). XML-RPC[3] did work, but you still needed to have the application model somewhere else anyway.
Still, JSON has seen just as much abuse: a "serialization" format which ended up used as configuration, schema definitions, rules language... It's the circle of life.
I think XML fit well into the turn-of-the-millennium zeitgeist: GUIs would hide the verbosity; the proliferation of bespoke tags would map cleanly to OOP representations; middleware could manipulate and transform data generically, allowing anything to plug into anything else (even over the Internet!).
Whilst lots of impressive things were built, the overall dream was always just out of reach. Domain-specific tooling is expensive to produce and maintain, and often gives something that's not quite what we want (as an extreme example, think of (X)HTML generated by Dreamweaver or FrontPage); generic XML processors/editors don't offer much beyond avoiding syntax/nesting errors; so often it was simplest to interact directly with the markup, where the verbosity, namespacing, normalisation, etc. wouldn't be automated-away.
XML's tree model was also leaky: I've worked with many data formats which look like XML, but actually require various (bespoke!) amounts of preprocessing, templating, dereferencing, etc. which either don't fit the XML model (e.g. graphs or DAGs), or just avoid it (e.g. sprinkling custom grammar like `${foo.bar}` in their text, rather than XML elements like `<ref name="foo.bar" />`). Of course, it was hard to predict how those systems would interact with XML features like namespaces, comments, etc. which made generic processing/transforming middleware less plug-and-play. That, plus billion-laughs mitigations, etc. contributed to a downward spiral of quality, where software would not bother supporting the full generality of XML, and only allowed its own particular subset of functionality, written in one specific way that it expected. That made the processors/transformers even less useful; and so on until eventually we just had a bunch of bespoke, incompatible formats again. At which point, many just threw up their hands and switched to JSON, since at least that was simpler, less verbose and easier to parse... depending on whether you support comments... and maybe trailing commas...; or better yet, just stick to YAML. Or TOML.....
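To illustrate the `${foo.bar}` point: a generic XML tool can find, validate and rewrite an element-based reference, but bespoke interpolation syntax is just opaque character data to it. A minimal Python sketch (the document and names here are made up for illustration):

```python
import xml.etree.ElementTree as ET

# Two ways a hypothetical format might reference the value "user.name":
doc = """<config>
  <greeting>Hello, ${user.name}!</greeting>
  <farewell>Bye, <ref name="user.name"/>!</farewell>
</config>"""

root = ET.fromstring(doc)

# The XML-native reference is a real node: generic tooling can see it.
refs = [el.get("name") for el in root.iter("ref")]
print(refs)  # ['user.name']

# The ${...} interpolation is just text to the parser; only the format's
# own bespoke preprocessor knows it means anything.
print(root.find("greeting").text)  # 'Hello, ${user.name}!'
```

Which is exactly why generic middleware could do something useful with the second style but would happily mangle (or blindly pass through) the first.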
(My favourite example: at an old job, I maintained an old server that sent orders from our ecommerce site to third party systems, using a standard "cXML" format. Another team built a replacement for it, I helped them by providing real example documents to test with, and eventually the switch was made. Shortly after, customers were receiving dozens of times what they ordered! It turned out that a third-party was including an XML declaration like `<?xml>` at the start of their response, which caused the new system to give a parse failure: it treated that as an error, assumed the order had failed, and retried; over and over again!)
> And I mean, it's not like SOAP+WSDL actually worked well either, it was always unreliable.
I don't think it ever worked. See this [0]. It's pretty crazy that people built one of the most complex and verbose data exchange formats in the world, and then it turns out that duplicating the open and close tag and including the parameter name and type in the attributes bought you nothing, because implementations were treating your SOAP request as an array of strings.
> it's not like SOAP+WSDL actually worked well either, it was always unreliable
This is comparable to saying that "multiplayer distributed architecture at scale" never worked well and was unreliable. It all depends on what your needs are and how the design and implementation satisfy them. SOAP+WSDL were part of a larger technology vision of Service Oriented Architecture (SOA), with all the complexities of distributed architecture. And the attempt was to make all of that open-standards based.
I worked in Print at the time at one of the largest companies (now gone bust), and can confidently say that SOAP+WSDL worked perfectly for us, and made it way more reliable to tie all this very specialized printing equipment with archaic languages and interfaces together, increasing productivity and efficiency of the entire print process.
SOAP always seemed to mostly work, but if something did fail it was an utter nightmare to work out what the problem was - WSDL really wasn't much fun to read.
Whereas when REST APIs came out (using JSON or XML) they were much easier to dive into at the command line with curl and work out how to get things started and diagnose problems when they inevitably came up.
I still cannot tell which one I hate the most: CSV or JSON. These really are hacks for data exchange that should never have gotten the attention of the world.
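To be fair to that complaint, the lossiness is easy to demonstrate with Python's standard csv and json modules (the values here are made up):

```python
import csv
import io
import json

# CSV has no types: everything round-trips as a string, None becomes
# an empty field, and fields with commas need quoting.
buf = io.StringIO()
csv.writer(buf).writerow([42, "Hello, world", None])
print(repr(buf.getvalue()))  # '42,"Hello, world",\r\n'

row = next(csv.reader(io.StringIO(buf.getvalue())))
print(row)  # ['42', 'Hello, world', ''] -- the int and the None are gone

# JSON at least keeps basic types, but has no schema, comments,
# or date type either.
print(json.loads(json.dumps([42, "Hello, world", None])))
```

Neither was designed to carry a schema, which is how both ended up re-growing ad-hoc schema ecosystems on top.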
that seems like a particularly bad implementation :)
IME things worked OK 70% of the time, but I do recall big matrices of "does client library X work with server Y" with a lot of red cells.
I have an inbox, and I do not receive a lot of scam post. In fact, I don't think I've received any in the time I've lived at this address (~10 years). We do get a few promotional leaflets every other week.
OTOH, I get hundreds of spam emails every day.
The former is something I can easily handle manually; the latter is not.
Mostly folks who bought the base model with small amounts of RAM, I imagine.
While it’s workable, anything less than 24GB to me feels rather constrained. I definitely am not efficient though - leaving way too many browser tabs open I never actually get back to, running a few chrome profiles for work/side hustle/personal, etc.
I don’t think I’ve ever been CPU constrained for many years now. The few times I need to something that maxes out CPU just isn’t worth the upgrade vs taking a break to grab a cup of coffee.
I'm pretty sure they care who takes pictures or videos of them.
Try going on a train and taking pictures of a young woman or man.
The only difference is these are less noticeable.
It is supposed to indicate Microsoft cares only about money, which, to me too, seems in the same league as microslop, i.e. mildly insulting but really not rude enough to be worth censoring.
Last week on a comedy show (The Daily Show) they made a "micro and soft" joke about Bill Gates, which was already old in the 90s, so I can confirm this is the case.
I think this was 100% justifiable use. If the founder of the company is going to be hanging out with pedophiles and sex traffickers, then micro and soft jokes are open season. All of his philanthropic adventures will never wipe his stain clean.
Good projects. I have only used Clojure professionally for about 2 years out of the last 15, but I lived in CIDER.
When I bought my new laptop a few months ago I consciously and purposefully refused to install VSCode, just improved my Emacs setup for all writing and programming - and I have been happier for it.
[0] https://docbook.org/
[1] https://developer.mozilla.org/en-US/docs/Web/XML/XSLT
[2] https://ant.apache.org/manual/using.html
[3] https://en.wikipedia.org/wiki/XML-RPC