
My car does not have software. Certainly no screens. Thank god.

Probably a question most people never ask, though the answer should be obvious to those on this forum.

People don’t like claims with no source. This is not Reddit.

For sources:

Overview of studies, including human:

https://www.xiahepublishing.com/m/2835-6357/FIM-2024-00006

NIH:

https://pubmed.ncbi.nlm.nih.gov/21889885/

Most recent I could find:

https://www.researchgate.net/publication/51617171_Tiliroside...


Robolympics.ai

I want to use this on a website!

Certainly not better… it’s a one-person project, after all… but I have a TypeScript workflow solution, not quite ready for prime time, at workglow.dev. I’ll have AI agent stuff both in the framework and the UI (it’s feature-flagged off at the moment) in the January/February time frame.

The site above only runs local, in-browser models and uses a local user account, so any number of people can play with it and it costs me nothing to host.

It’s still a ways away from a Show HN post, and it’s more capable with remote frontier models, or with GGUF over ONNX (maybe?), whenever I get the local app out.


Why would “Financial Strategy Group, Ltd” be redacted?

> the probability distribution the model outputs is identical under identical conditions.

A local model running alone on your machine will always return exactly the same thing, its internal state will be exactly the same, and you can checkpoint or cache that state to avoid rerunning to that point.
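To make that concrete, here's a minimal TypeScript sketch of memoizing a deterministic inference call. `generate` is a hypothetical stand-in (stubbed here so the sketch runs); the point is just that if the output is deterministic, identical inputs can safely return the cached result.

  import { createHash } from "node:crypto";

  // Hypothetical stand-in for your actual inference call, stubbed so
  // the sketch runs. Assumed deterministic under identical conditions.
  async function generate(prompt: string, params: { temperature: number }): Promise<string> {
    return `echo(t=${params.temperature}): ${prompt}`;
  }

  const memo = new Map<string, string>();

  // Because the output is deterministic, (prompt, params) is a safe
  // cache key — no need to rerun to the same point twice.
  async function cachedGenerate(prompt: string, params: { temperature: number }): Promise<string> {
    const key = createHash("sha256").update(JSON.stringify({ prompt, params })).digest("hex");
    const hit = memo.get(key);
    if (hit !== undefined) return hit;
    const result = await generate(prompt, params);
    memo.set(key, result);
    return result;
  }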

But… conditions can be different, and batching requests tends to affect other items in flight. I believe Thinking Machines had an article about how to make a request deterministic again without performance going to complete crap.

I tend to think of it this way (not at all what actually happens, though): what if you were to cache results using a tensor as the key? To generate a reasonably sized key, what loss of precision is acceptable so that you still retrieve the same cache entry, knowing there is inherent jitter in the tensor's values?
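As a toy sketch of that thought experiment (everything here is made up for illustration, including the 3-decimal cutoff): quantize the tensor before hashing, so jitter below the cutoff lands on the same key.

  import { createHash } from "node:crypto";

  // Round each element to `decimals` places so numeric jitter below
  // the cutoff collapses into the same bucket, then hash the result
  // to get a fixed-size cache key.
  function tensorCacheKey(tensor: Float32Array, decimals = 3): string {
    const scale = 10 ** decimals;
    const quantized = new Int32Array(tensor.length);
    for (let i = 0; i < tensor.length; i++) {
      quantized[i] = Math.round(tensor[i] * scale);
    }
    return createHash("sha256").update(Buffer.from(quantized.buffer)).digest("hex");
  }

  const cache = new Map<string, string>();
  cache.set(tensorCacheKey(Float32Array.from([0.12345001, -0.6789])), "cached result");

  // Jitter below the precision cutoff produces the same key:
  console.log(cache.has(tensorCacheKey(Float32Array.from([0.12345002, -0.67890001])))); // true

The trade-off is exactly the one in the question: a coarser cutoff gives more cache hits, but also more collisions between genuinely different tensors.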

And then there's the ever-so-slight leak of information. It's multiplied, too, since there are internal KV caches for tokens and so on.


Ones with better answers. Twitter dumbs down Grok.

Nice. I have a simple system for TypeScript [1] where you can string tasks together like:

  import { Workflow } from "@workglow/task-graph";

  const workflow = new Workflow();
  workflow
    .DownloadModel({
      model: ["onnx:Xenova/LaMini-Flan-T5-783M:q8", "Universal Sentence Encoder"],
    })
    .TextEmbedding({
      text: "The quick brown fox jumps over the lazy dog.",
    });

  await workflow.run();

It automatically caches results of steps if you say `new Workflow({outputCache});`

PS: the example above uses a local ONNX model (it switches on the model name), which is a good candidate for caching, but it can be anything.

You can play around writing this way if you open the console at the web example [2], which uses custom console formatters, a DevTools feature not enough people know about or use. You can write code in the console and the example rewrites it as JSON in the web page (and vice versa).
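For anyone who hasn't seen the feature: custom formatters are a real DevTools API (window.devtoolsFormatters), off by default. A minimal sketch of the shape, with a made-up Task class:

  // Enable in DevTools: Settings → Preferences → Console →
  // "Enable custom formatters".
  class Task {
    constructor(public name: string) {}
  }

  (globalThis as any).devtoolsFormatters = [
    {
      // Return JsonML for objects you want to restyle, null otherwise.
      header: (obj: unknown) =>
        obj instanceof Task ? ["div", { style: "color: teal" }, `Task: ${obj.name}`] : null,
      hasBody: () => false,
      body: () => null,
    },
  ];

  console.log(new Task("TextEmbedding")); // rendered by the formatter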

Or just use the free web app, with a local user account and local models, to play around with. [3]

[1] https://github.com/workglow-dev/workglow

[2] https://workglow-web.netlify.app/

[3] https://workglow.dev

