Hacker News | najarvg's comments

Grandpa rats shaking their paws in anger going "It's all of 'em darned video games spoiling this generation of rats! Back in my days..."


If you add in the $1,000 that the Treasury plans to invest starting next year, that is $1,250, which compounded at 5% annually comes to $3,008.27 after 18 years. It's probably still not much of a "head start" given that inflation is assumed to run at 2.5 to 3.5% annually and will take a bite out of the real value in 18 years. Good intentions, but misplaced, as others have stated. Investing in other ways to provide upward economic mobility would deliver much better ROI for society than allowing most of the wealth to accrue to a handful of people.
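For anyone who wants to check the arithmetic, here's a quick sketch. Assumptions (mine, not official program terms): annual compounding at 5%, and inflation discounted over the same 18-year horizon.

```python
# Sketch of the compounding math above. Assumed: annual compounding,
# and inflation discounted over the same 18-year horizon.
def future_value(principal, rate, years):
    """Nominal value after compounding once per year."""
    return principal * (1 + rate) ** years

nominal = future_value(1250, 0.05, 18)
print(round(nominal, 2))                      # 3008.27
# Real (inflation-adjusted) value if inflation runs 2.5% or 3.5%:
print(round(nominal / 1.025 ** 18, 2))        # ~1929 in today's dollars
print(round(nominal / 1.035 ** 18, 2))        # ~1620 in today's dollars
```

So even in the kinder 2.5% scenario, the "head start" is worth under $2,000 in today's dollars.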


> If you add in the $1,000 that the Treasury plans to invest starting next year, that is $1,250

This is largely separate from your point, which is good, but the $250 is for kids who won't get the $1,000. The $1,000 only goes to kids born between 2025 and 2028.


Real quick: with the $1,000 530A account, if you put in just $1/day ($30/mo) on top of that, you get out ~$12,000 at the end of 18 years (assuming a 5% interest rate). Which, yeah, is enough to start a very small business (lawncare, blacksmithing, etc.).

The stock market has returned ~9.5% historically and inflation runs ~3% historically, so call it 6.5% real; that $1,000 with a dollar a day on top is then ~$14,800, inflation adjusted.

If you go up to ~$100/mo at 6.5%, then you get ~$42,000, which is an honest start to a small business or college tuition.
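These per-month figures can be sanity-checked with a short script. Assumptions (mine): the $1,000 lump sum and the deposits both compound monthly; exact figures shift a bit with compounding frequency, which is why these land slightly above the round numbers quoted.

```python
# Rough check of the figures above. Assumed: $1,000 lump sum plus a fixed
# monthly deposit, both compounded monthly until the child turns 18.
def account_value(lump_sum, monthly, annual_rate, years):
    r = annual_rate / 12          # periodic (monthly) rate
    n = years * 12                # number of deposits
    growth = (1 + r) ** n
    # lump-sum growth + future value of an ordinary annuity of deposits
    return lump_sum * growth + monthly * (growth - 1) / r

print(round(account_value(1000, 30, 0.05, 18)))    # ~12900 at 5%
print(round(account_value(1000, 30, 0.065, 18)))   # ~15500 at 6.5% real
print(round(account_value(1000, 100, 0.065, 18)))  # ~44000 with $100/mo
```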

The little extra per month really adds up here!

I may not like the administration for a lot of things, but this is one thing that I can really get behind.


Fascinating. Thanks for sharing! Some time back I ran into a related experiment where the author set up a simple one-layer NN with shift-register feedback and explored the state space of neuron activations over many iterations. The observation was beautiful: the state-space maps traced out attractors. See here if you are curious - https://towardsdatascience.com/attractors-in-neural-network-...


Also beautiful. Thank you for sharing it on HN.


An on-the-fly 3D-printing gun that uses a biocompatible thermoplastic to heal complex bone fractures


Astonishing! Thank you very much for sharing. This sentence really stuck out for me: "I was proud! I was tired! I was amazed that all those things I received are all around us, everywhere, all at once – if you know where to look. :O"


This was the nearest reference I could find; an unofficial PyTorch implementation on GitHub is also linked somewhere in the threads - https://www.reddit.com/r/LocalLLaMA/comments/1i0q8nw/titans_...


Seems to be Windows-only. Also worth noting that all the currently supported models are 8B or less, based on the README on GitHub.


Sample size of 1, so caveat emptor! In my household, YT Shorts on the TV is used precisely for that reason by my kid, who does not have a dedicated device. I personally like to watch travel videos or train videos, which display a lot better on a larger TV screen since they are essentially documentaries with lots of visuals that should be "experienced".


I'm not hands-on with Django (or other Python-based frameworks), so pardon the basic question: how does the speed of the generated application compare with the speed of a generated Rails application? I know the latter has made some strides recently.


I built a data app one time and one of my devs spent a lot of brain cycles trying to get data to process a little bit faster.

We were getting data through a partner who restricted our data access through an API, where we were limited to 100 records per call. Turns out discussing Spark vs DuckDB isn't helpful if 99.9% of your software latency is from having to make 750,000 HTTP calls every weekend to run your BI pipeline. For the record, that API was a Rails app - but it was certainly not the fault of the framework in that scenario.

Point being, for web apps, I don’t think it matters unless you’re in the top 100 websites, and even then it probably doesn’t. Complaints about htmx efficiency always confused me for this reason. Your app isn’t slow because you rendered HTML instead of JSON.

Sorry to digress, others may know better than I do but this one is just my experience.

The only time I’ve run into computation speed bottlenecks in either was doing data analysis in Python, and you usually just bring in Python libraries that aren’t written in Python like polars or DuckDB. Sounds dumb, but it works pretty well.

Standard practice for me has always been that long-running tasks get sent to a job queue anyway, so ideally nothing in the UI depends on something long-running. But again, in my own work any long-running task is almost always limited by network latency.
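That pattern, for what it's worth, fits in a few lines of stdlib Python - a stand-in for a real job queue like Celery or Sidekiq, with illustrative names, not anyone's actual API:

```python
# Minimal sketch of "send long-running work to a job queue": the request
# handler enqueues and returns immediately; a worker thread does the slow part.
import queue
import threading

jobs = queue.Queue()
results = {}

def worker():
    while True:
        job_id, fn, args = jobs.get()
        results[job_id] = fn(*args)   # the slow work, off the request path
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def handle_request(job_id):
    """The 'web' handler: enqueue and return right away."""
    jobs.put((job_id, lambda n: sum(range(n)), (1_000_000,)))
    return {"status": "queued", "job_id": job_id}

print(handle_request("report-42"))   # returns immediately
jobs.join()                          # in real life the UI would poll instead
print(results["report-42"])          # 499999500000
```

In a real app the queue and results store would be external (Redis, a database) so workers survive restarts, but the shape is the same.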


Python is generally faster than Ruby, especially in the newer versions. That said, we’re still talking about two of the slowest languages out there, so the performance gap probably isn’t that big.


This was generally true before the introduction of YJIT, but with YJIT, Ruby's performance has improved significantly and may even outpace Python in some scenarios[1].

[1]: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


Yeah, Python 3.13 got some great performance improvements as well. But both languages are quite sluggish compared to something like Go, or even Node. Also, while I like Ruby, I wouldn't opt for it to build something other than a RoR app.


ARPA-H, FWIW, is set up within the NIH, but it is structured to focus more on "breakthrough"-type processes, procedures, devices, etc. Also, ARPA-H proposals are not driven solely by peer-review scoring, unlike most NIH funding mechanisms. Typically, ARPA-H programs tend to cut across specific disease areas and be more general in nature.

