
Worse yet, if you're not an expert (with autodidacts potentially qualifying), your ideas won't be original anyway.

You'll be inventing a lot of novel circular apparatus with a pivot and circumferential rubber absorbers for transportation, and it'll take people serious effort to convince you it's just a wheel.


In most domains, working on a project for a few years will make you an expert.

Working? Maybe. Prompting? Unlikely.

And in some other domains it takes a few decades to get to the top technically, not just a few years.

GPUs before crypto had a lot less VRAM. Crypto investment funded a lot of stupid experiments, some of which did stick to the wall. I don't think gamers had their lives completely ruined by crypto in the end.

Crypto didn't need VRAM, did it? It was just about hash rate, no?

Besides, a 1080 had 8GB and a 5080 has 16GB; doubling in 10 years isn't groundbreaking. The industry put the VRAM into industrial chips. It didn't make it to consumer hardware.

What games have had to deal with instead is inference-based upscaling, i.e. using AI to sharpen a lower-res image in real time. It seems to be the only trick being worked on at the moment.
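For illustration only (this is not how DLSS/FSR are actually implemented): the pipeline renders at a lower resolution and upscales each frame before presenting it, and the ML part replaces the plain interpolation step with a learned model. A minimal Python/Pillow sketch of the non-ML baseline:

    # Sketch: render low-res, upscale before presenting. An ML upscaler would
    # replace the bicubic call below with a learned super-resolution model.
    from PIL import Image

    def upscale_frame(frame: Image.Image, scale: int = 2) -> Image.Image:
        w, h = frame.size
        # Plain bicubic interpolation as the non-ML baseline.
        return frame.resize((w * scale, h * scale), Image.Resampling.BICUBIC)

    low_res = Image.new("RGB", (960, 540))    # stand-in for a rendered frame
    presented = upscale_frame(low_res)        # 1920x1080 shown to the player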

I can't think of anything useful crypto did.


I think the somewhat hallucinatory canned response is that they distribute data across drives for massive throughput. Though idk if that even technically makes sense...
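For what it's worth, that claim maps onto RAID 0-style striping: split the data into chunks, put alternating chunks on different drives, and read them in parallel so the per-drive bandwidths add up. A toy sketch of the read side (paths and stripe size are made up for illustration):

    # Toy striped read: chunk i lives on drive i % N, so reads can hit
    # all drives concurrently and (in theory) aggregate their throughput.
    from concurrent.futures import ThreadPoolExecutor

    DRIVES = ["/mnt/disk0/stripe.bin", "/mnt/disk1/stripe.bin"]  # hypothetical paths
    CHUNK = 1 << 20  # 1 MiB stripe unit

    def read_chunk(path: str, position: int) -> bytes:
        with open(path, "rb") as f:
            f.seek(position * CHUNK)
            return f.read(CHUNK)

    def read_striped(n_chunks: int) -> bytes:
        with ThreadPoolExecutor(max_workers=len(DRIVES)) as pool:
            futures = [pool.submit(read_chunk, DRIVES[i % len(DRIVES)], i // len(DRIVES))
                       for i in range(n_chunks)]
            return b"".join(f.result() for f in futures)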

Face scan is a total dice roll full of bias, whatever you do. With or without ML.

... why is the hxxps:// URL in the article linkified? It's a URL scheme created explicitly to mark a URL as unsafe.
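For context, hxxps:// is the usual "defanging" convention in security writeups: the scheme (and often the dots) are mangled precisely so tooling won't turn the indicator into a clickable link. Re-fanging is supposed to be a deliberate step by the reader, roughly:

    # Defanged URLs are meant to stay non-clickable; refanging should be an
    # explicit action by the reader, not something the page does for them.
    def defang(url: str) -> str:
        return (url.replace("https://", "hxxps://")
                   .replace("http://", "hxxp://")
                   .replace(".", "[.]"))

    def refang(url: str) -> str:
        return (url.replace("hxxps://", "https://")
                   .replace("hxxp://", "http://")
                   .replace("[.]", "."))

    print(defang("https://malware.example.com/payload"))
    # hxxps://malware[.]example[.]com/payload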


I don't think they do image gen etc.? You can almost think of Off Grid as an on-device assistant. It'll handle everything like vision, and attachments too. We're still in the early stages with lots of optimizations to do, but we'll get there.

This took me a while (I'm slow), but I think GP is saying: "I've seen enough people who think ideas are the key when engineering actually was; with everyone snorting LLMs, we'll see the same thing replicated in the software world", just put more nicely.

THAT makes sense. Engineering was never cheap nor non-differentiating when normalized by man-hours, only when normalized by USD. If a large enough number of people get the same FALSE impression that software and firmware parts are now basically free, non-differentiating commodities, there will be tons of spectacular failures in the software world in the coming years. There have already been early previews of those here.


Frankly, this sounds like some people aren't so much uncomfortable with these machines' absolute utility as with their sheer cost. CT and MRI machines are said to cost something like $1M/yr/unit, at roughly $500 uninsured / $100 insured per scan in Japan, and China doesn't publish data on how many units it has or how they're distributed. That has "military-grade expensive" written all over it.
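Back of the envelope, taking those (unverified) figures at face value: at $1M/yr per machine and $500 per uninsured scan, one unit needs on the order of 2,000 scans a year, i.e. 5-6 a day, just to cover its own running cost.

    # Rough break-even arithmetic with the figures quoted above (all assumed, not sourced).
    annual_cost = 1_000_000                           # USD per machine per year
    price_per_scan = 500                              # USD, uninsured rate in Japan
    scans_per_year = annual_cost / price_per_scan     # 2000.0
    scans_per_day = scans_per_year / 365              # ~5.5
    print(scans_per_year, round(scans_per_day, 1))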

The actual issue, according to another comment [0], is this [1]:

> Around iOS 17 (Sept. 2023) Apple updated their autocorrect to use a transformer model which should've been awesome and brought it closer to Gboard (Gboard is a privacy terror but honestly, worth it).

> What it actually did/failed to improve is make your phone keyboard:

> Suck at suggesting accurate corrections to misspelled words

> "Correct" misspelled words with an even worse misspelling

> "Correct" your correctly spelled word with an incorrectly spelled word

Which makes me wonder: are Transformer models any good at manipulating short texts and texts containing errors at all? It's kind of known that open-weight LLMs don't perform well on CJK conversion tasks[2], and I've also been disappointed by their general lack of typo tolerance myself. They're BAD at translating ultrashort sentences and singled-out words as well[3]. They're great for vibecoding, though.

Which makes me think: are they usable for anything under 100 bytes at all? Do they seem to have a minimum usable input entropy or something?

0: https://news.ycombinator.com/item?id=47006171

1: https://thismightnotmatter.com/a-little-website-i-made-for-a...

2: e.g. the process of yielding "㍑" (the liter symbol) from typing "rittoru"

3: No human can translate, e.g. "translate left" in isolation correctly as "move left arm", but LLMs seem to be more all over the place than humans
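To put a rough number on the "minimum usable input entropy" hunch above: a character-frequency estimate of Shannon entropy (a crude proxy, not true information content) shows how little signal a sub-100-byte input carries compared with a paragraph-sized prompt.

    # Crude total Shannon entropy (bits) of a byte string, from byte frequencies.
    # A <100-byte input carries only a few dozen bits by this measure.
    from collections import Counter
    from math import log2

    def total_entropy_bits(data: bytes) -> float:
        counts = Counter(data)
        n = len(data)
        per_byte = -sum((c / n) * log2(c / n) for c in counts.values())
        return per_byte * n

    print(total_entropy_bits("teh quick brwon fox".encode()))   # ~70 bits
    print(total_entropy_bits(("lorem ipsum " * 50).encode()))   # ~2,000 bits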


Yeah, on the second point. AI-based Japanese TTS does that, dropping parts of words and/or getting the readings of imported words wrong. I don't know precisely why, but probably part labeling, part over-acting. Agreed on the lessons being shallow.

The UI also hung the browser for a full 5 seconds in places.


