Democratizing visual communication is arguably useful, for instance helping people create diagrams that illustrate a concept they wish to convey. This is contingent, though, on the tech working well enough that the visuals communicate more effectively than the text that went into producing them.
It has always felt like overhype to call something "democratization" when it's something I could do as a middle schooler in 2005. It takes some skill to do very well, but basic diagram creation is something people could already do basically for free (I create figures for my job all the time, and ChatGPT costs more than the design tools I use).
Commissioning high quality diagrams from a designer is expensive and I guess it's much cheaper now to essentially commission something but idk, "democratization" still feels weird for just undercutting humans on price.
You are making a mistake a lot of people make when talking about genAI helping others do work. I get that it is very easy for you, but there are plenty of people who can't do it. What you are saying is like a hobbyist carpenter arguing that, because he could build a bedside table in a weekend, it isn't okay for tables to be made on an assembly line instead of by hiring a carpenter.
I think you're missing my point, which is pretty narrow here. "Democratization" is a fairly grand term implying that the general public now has access to something freeing they didn't have before (it generally evokes some idea of liberation, since the term is often used for a transition from an authoritarian to a democratic government). I don't think there has ever been a particularly high barrier to making good diagrams; in my experience it's a cheap and quick skill to learn, so this usage feels like it cheapens the term "democratization". Maybe I'm being a bit sensitive, though, given how the world is right now, with people sometimes literally fighting for democracy. Normally I'm pretty lax about semantics, but some people have really rubbed me the wrong way when overhyping AI.
And yet coming up with insightful diagrams, even or especially when they are particularly simple, can be a point of fame (cf. Feynman diagrams). Diagrams often need to "lie" in some sense, so it can actually be quite difficult to find a way to convey the point you want without misleading in some other important way. E.g., I had a geometry professor who would label the x-axis R^n and the y-axis R^m for a bunch of different pictures, which on its face makes no sense, but it conveyed what it needed to.
People tried to prove the parallel postulate redundant for thousands of years because they lacked the right picture to show why it's necessary.
Yeah, it's not "democratization", people were just too lazy to do it before. It only takes some basic effort and a little bit of time to be able to create decent versions of those things.
My workplace does this for EVERYTHING, and the results are always immediately, obviously AI slop: partly because we all know they would never pay an actual artist to create graphics, but also because the people generating them have no sense of style and let the tool produce the most generic shit possible, with zero creativity.
It's definitely not helpful, just annoying, disgusting, and a waste of resources IMO. But hey, at least PowerPoint presentations have AI slop instead of stuff taken from Google Images!?
The point of a diagram is that you have something in your head to turn into the diagram. There's no point if you can't do it yourself and the image generator is coming up with it for you.
I disagree. Diagrams are a type of visual communication, and not everyone is good at translating ideas into visuals. I open Excalidraw with clear concepts in my head, but nothing comes out. I try C4 or flow diagrams, and I spend an excessive amount of time reworking them only to end up with something mediocre anyway. And it's not just me; I know MANY developers who are amazing at explaining things but mind-blocked when drawing simple circles and arrows.
Helping us navigate things we aren't good at has been one of the main selling points of AI.
It's not translation if it's completely AI-generated to begin with. Instead of addressing your mental deficits (which sound severe), you're offloading the work and making the problem worse.