
What's a good use case for a defense contractor to generate AI images, besides including them in presentations?


Fabricating evidence of weapons of mass destruction in some developing nation.

I kid; more realistic real-world use cases would be concept images for a new product, or for marketing campaigns.


...you can do that with a pencil, though.

What an impossibly weird thing to "need" an LLM for.


You can also create images by poking bits in a hex editor. Some tools are better suited than others.


I suppose you travel everywhere on foot?


Sometimes. My feet don't have a random chance of sending me in a direction other than the one I intend.


This is why I don't ride horses.


Think of all the trivial ways an image generator could be used in business, and there is likely a similar use-case among the DoD and its contractors (e.g. create a cartoon image of a ship for a naval training aid; make a data dashboard wireframe concept for a decision aid).


Input one image of a known military installation and one of a civilian building. Prompt the model to generate a similar _civilian_ building, but one resembling that military installation in some way: similar structure, similar colors, similar lighting.

Then include this image in the training dataset of another net, labeled "civilian". Training on such hard negatives gives the new net a lower false-positive rate when asked "is this target military?"
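A minimal sketch of that labeling step (all paths and the folder layout are made up for illustration):

    # Fold generated "civilian lookalike" images into the training set
    # as hard negatives for the "is this target military" classifier.
    import shutil
    from pathlib import Path

    GENERATED = Path("generated/civilian_lookalikes")  # image-model output (hypothetical)
    TRAIN_DIR = Path("dataset/train/civilian")         # label = civilian

    TRAIN_DIR.mkdir(parents=True, exist_ok=True)
    for img in GENERATED.glob("*.png"):
        # Civilian buildings that resemble military ones teach the net
        # not to key on superficial similarity.
        shutil.copy(img, TRAIN_DIR / img.name)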


You'll never get promoted thinking like that! Mark them all "military", munitions sales will soar!


You might not believe it, but the US military actually places a premium on not committing war crimes. Every service member, or at least every airman in the Air Force (I can't speak for other branches), receives mandatory training on the Kunduz hospital strike before deployment, in an effort to prevent another similar tragedy. If they didn't care, they wouldn't spend thousands of man-hours on it.


I know they do. They have proxies who can get their hands dirty when that's needed. Every major geopolitical military player is the same.


I knew a guy whose job was to assess and approve the legality of each strike, considering second-order impacts on the community.


> On 7 October 2015, President Barack Obama issued an apology and announced the United States would be making condolence payments of $6,000 to the families of those killed in the airstrike.

Definitely a premium.


Most importantly, they finance propaganda films like "Eye in the Sky" to make it look like they give a shit about not killing civilians.

Videos on WikiLeaks tell a different story.


Bombs and other weapon systems that are "smarter" carry higher markups; it's profitable to sell smarter weapons. Dumb weapons destroy whole cities, as Russia did in Ukraine. Smart weapons strike a tank, a car, an apartment, a bunker, knowing who's there and when, which obviously means a lower percentage of civilian casualties.


Remember when the Obama administration redefined things so that all military-age males in a strike zone counted as combatants? That's how the USA reduces civilian casualties.


The very simple use case is generating mock targets. In movies they make it seem like they use mannequin-style targets or traditional concentric circles, but those are infeasible and unrealistic, respectively. There's an entire modeling industry here, and being able to replace that with infinitely diverse AI-generated targets is valuable!


Generating 30,000 unique images of artillery pieces hiding in underbrush to train autonomous drone cameras.


I don't really understand the logic here. All the actual signal about what artillery in bushes looks like is already in the original training data. Synthetic data cannot conjure empirical evidence into existence; it's as likely to produce false images as real ones. Assuming the military has more privileged access to combat footage than a multi-purpose public chatbot, I'd expect synthetic data to degrade the accuracy of a drone.


Generative models can combine different concepts from the training data. For example, the training data might contain a single image of a new missile launcher at a military parade. The model can then generate an image of that missile launcher hiding in a bush, because it has internalized the general concept of things hiding in bushes, so it can apply it to new objects it has never seen hiding in bushes.
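As a hedged sketch, composing concepts like that is just a prompt away; this assumes the OpenAI Python SDK, and the model name and response fields are guesses rather than anything confirmed:

    # Compose two concepts the model has only seen separately.
    import base64
    from openai import OpenAI

    client = OpenAI()
    resp = client.images.generate(
        model="gpt-image-1",  # hypothetical model choice
        prompt="a missile launcher partially hidden in dense bushes, "
               "overcast light, viewed from a low-flying drone",
        n=1,
        size="1024x1024",
    )
    with open("synthetic_0001.png", "wb") as f:
        f.write(base64.b64decode(resp.data[0].b64_json))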


I'm not arguing this is the purpose here, but data augmentation has been done for ages. It just kind of sucks a lot of the time.

You take your images and crop, shift, and otherwise perturb them so that your model doesn't learn "all x are in the middle of the image". For text you might auto-replace days of the week with other days; there's a lot of work in this area.

Broadly, the intent is to keep the key information and generate realistic but irrelevant noise, so that you train a model that correctly ignores the noise.

You don't want a model that identifies some class of ship to base its decision on how choppy the water is, just because that was the simple signal that correlated well. There was a case of a radiology model that detected cancer well but was actually detecting rulers in the image, because images with tumors often included a ruler so the tumor could be sized. (I think it was cancer; the broad point applies if it was something else.)
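A minimal sketch of that kind of classic augmentation, assuming torchvision (the specific transforms are illustrative, not anyone's actual pipeline):

    # Classic augmentation: no generative model involved.
    from torchvision import transforms

    augment = transforms.Compose([
        transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),  # don't learn "x is centered"
        transforms.RandomHorizontalFlip(),
        transforms.ColorJitter(brightness=0.3, contrast=0.3), # lighting shouldn't matter
        transforms.GaussianBlur(kernel_size=5),               # focus / sensor noise
        transforms.ToTensor(),
    ])
    # Applied per epoch, each image yields a fresh "irrelevant noise" variant:
    # tensor = augment(pil_image)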


If you're building a system to detect something, you usually need enough variation. You add noise to the images, etc.

With this, you could create a dataset that by definition has that variation. You should still corroborate the data, but it's a step ahead of having to take 1,000 photos and add enough noise and variation to get to 30k.
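A rough sketch of that scale-up with Pillow, where augment() is a stand-in for a real perturbation pipeline and the folder layout is invented:

    # Expand ~1,000 real photos into ~30,000 noisy variants.
    import random
    from pathlib import Path
    from PIL import Image

    def augment(img):
        # Stand-in pipeline: random crop plus a coin-flip horizontal mirror.
        w, h = img.size
        dx, dy = random.randint(0, w // 10), random.randint(0, h // 10)
        img = img.crop((dx, dy, w - dx, h - dy)).resize((w, h))
        if random.random() < 0.5:
            img = img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)
        return img

    Path("dataset").mkdir(exist_ok=True)
    for src in Path("photos").glob("*.jpg"):
        for i in range(30):  # 1,000 photos x 30 variants = 30,000 images
            augment(Image.open(src)).save(f"dataset/{src.stem}_{i:02d}.jpg")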


What you're saying just isn't true.

I can get an AI to generate an image of a bear wearing a sombrero. There are no images of this in its training data, but there are images of bears, images of sombreros, and images of other things wearing sombreros. It can combine those distributions in a plausible way.

If I am trying to train a small model to fit into the optical sensor of a warhead to target bears wearing sombreros, this synthetic training set would be very useful.

Same thing with artillery in bushes. Or artillery in different lighting conditions. This stuff is useful to saturate the input space with synthetic examples.
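One hedged sketch of what saturating the input space might look like; the prompt axes here are invented, and generate_image() is a placeholder for whatever text-to-image call is actually available:

    # Build a grid of prompt variations over subject, cover, and lighting.
    import itertools

    subjects   = ["towed howitzer", "self-propelled artillery"]
    cover      = ["in dense underbrush", "under camouflage netting", "at a treeline"]
    conditions = ["dawn, low fog", "midday, harsh shadows", "dusk, rain"]

    prompts = [
        f"{what} {where}, {light}, aerial view"
        for what, where, light in itertools.product(subjects, cover, conditions)
    ]
    # for i, p in enumerate(prompts):
    #     generate_image(p, out=f"train/artillery_{i:04d}.png")  # placeholder call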


Unreal, Houdini, and a bunch of assets do this just fine, and they provide actually usable depth / infrared / weather / fog / time-of-day / and other relevant data for training - likely cheaper than using their API.

See bifrost.ai and their fun videos of training naval drones to avoid whales in an ethical manner.


It's probably not that, but who knows.

The real answer is probably way, way more mundane - generating images for marketing, etc.


Never underestimate the military PowerPoint[1] industry!

[1] https://media.wired.com/photos/5933e578714b881cb296c6ef/mast...


Well, considering an element of their access is the lifting of safety guardrails, I'd assume the scope includes, to some degree, the processing or generation of NSFW/questionable content.


The guardrails in question are around generating images of weapons, military installations, etc. Not run-of-the-mill NSFW stuff.


Perhaps. I still think it's more "we don't need to guard the government from itself" sort of thing.


Interesting. Let's say we have those 30k synthetic images and also 30k real unique images. My guess is that the real ones would carry more useful information, but is this measurable? And how much more?


See the IDF's Gospel AI - the goal isn't always accuracy; it's the speed of assigning new bombing targets per hour.


If the model can generate the images, can't it already recognize them?


The model they're training to perform detection/identification out in the field would presumably need to be much smaller and run locally, without relying on network connectivity. It makes sense, so long as the OpenAI model produces a training/validation set comparable to one their development team would otherwise need to curate by hand.
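For illustration only, a minimal PyTorch sketch of that idea: train a deliberately tiny classifier on the synthetic set (the folder layout and architecture are placeholders, not anyone's actual design):

    # Distill into a model small enough to run on-device.
    import torch
    import torch.nn as nn
    from torchvision import datasets, transforms

    data = datasets.ImageFolder("dataset", transform=transforms.Compose(
        [transforms.Resize((64, 64)), transforms.ToTensor()]))
    loader = torch.utils.data.DataLoader(data, batch_size=64, shuffle=True)

    tiny = nn.Sequential(                        # a few thousand parameters
        nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
        nn.Conv2d(8, 16, 3, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, len(data.classes)),
    )
    opt = torch.optim.Adam(tiny.parameters(), lr=1e-3)
    for x, y in loader:                          # one epoch shown; repeat as needed
        opt.zero_grad()
        loss = nn.functional.cross_entropy(tiny(x), y)
        loss.backward()
        opt.step()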


Manufacturing consent


Reality is turning into some kind of Hideo Kojima game.

https://youtu.be/-gGLvg0n-uY


Wow! What an amazingly dystopian vision of the future. Probably right.


It's a deepfake; it's not actually from the game MGS2. This is the actual video: https://www.youtube.com/watch?v=C31XYgr8gp0


Makes perfect sense.


Wow that video is awesome, thanks for sharing


Literally how it will be used; you are correct.


Generating or augmenting data to train computer vision algorithms. I think a lot of defense problems have messy or scarce data.


Generating pictures of "bad-guy-looking guys" so your automated bombs shoot more, so you sell more bombs.


AI image generation is a "statistical simulator". When fed the right information, it can generate scenery pretty close to reality.


Vastly oversimplified, but for every civilian job there's an equivalent military job. Superficially, the military is basically a country-sized, self-contained corporation. Anywhere that Wal-Mart's corporate office could use AI, so could the military.


Training, recruiting, sales (as you mention), testing image-based targeting.


The "military is famously bad at PowerPoint" meme comes to mind.



