Hacker News | bee_rider's comments

1060 6GB here. Figured the headroom would get me a couple extra years out of it. At this rate I’m wondering if the card is going to outlast the concept of owning graphics cards. Partly because, as you mention, maybe NVIDIA will stop selling them. Partly because, maybe APUs will get good enough…

About 3 years ago I got an RX 6750 XT with 12 GB of VRAM for $330 and I expect to be using that until either it dies or my computer's RAM dies and I don't have $10,000 to replace it. If only I'd maxed out all my DDR4 slots when DDR5 was the hot new thing and you could get it for cheap.

Strix Halo is already good enough. It's a premium product, though.

Strix Halo looks quite good. Hoping the stars will align, and my GPU will hold on long enough for the RAM famine to end and some Strix Halo successor to come out.

The AI bot wouldn’t be representing you any more than your text editor would be. You would be using an AI bot to create a lot of text.

An AI bot can’t be held accountable, so isn’t able to be a responsibility-absorbing entity. The responsibility automatically falls through to the person running it.


True. But it can help me create a lot of useful text so I can represent myself better.

I do wonder what happens when everyone is using agents for this, though. If AI produces the text and AI also reads the text, then do we even need the intermediary at all?


> I do wonder what happens when everyone is using agents for this, though.

The company is going to use AI agents to read and respond, too. Some botocalypse is going to happen at some point.


> Some botocalypse is going to happen at some point.

Yeah the bots can duke it out. As long as my time is saved.

For me the main concern is that, until I have a stash of millions of dollars saved up, my medical expenses need to be paid for by the system, because I can't afford surprise bills. Hopefully the bots can fight more on my side in the near future.

Hopefully in the far future, when the botocalypse happens, I'll have saved up enough that an insurer evading a $5,500 payment won't be an issue for me, and/or I'll be of retirement age, won't need job opportunities anymore, and can go live in a country with better healthcare.

Call me selfish, but I don't control the insurance/medical system, I don't have space to think about more than protecting myself from it.


> I do wonder what happens when everyone is using agents for this, though.

Unless one is very cavalier with one's definition of "everyone", this is not going to happen.

There will always be a very significant cohort of people who are emphatically uninterested in replacing their own judgement and composition skills with an Averages Machine.


The bot doesn't need to be held accountable. It only needs to spew out the right text that triggers humans to rightfully transfer accountability from me to the insurance company.

> We need to know if the email being sent by an agent is supposed to be sent and if an agent is actually supposed to be making that transaction on my behalf. etc

Isn’t this the whole point of the Claw experiment? They gave the LLMs permission to send emails on their behalf.

LLMs cannot be responsibility-bearing structures, because they are impossible to actually hold accountable. The responsibility must fall through to the user because there is no other sentient entity to absorb it.

The email was supposed to be sent because the user created it on purpose (via a very convoluted process but one they kicked off intentionally).


I'm not too sure what you're asking, but that last part, I think, is key to the eventual delegation.

We need to be able to verify the lineage of the user's intent: originally captured, then validated throughout the execution process, and eventually used as an authorization mechanism.

Google has a good thought model around this for payments (see verifiable mandates): https://cloud.google.com/blog/products/ai-machine-learning/a...
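As a toy illustration of the idea (my own sketch, not Google's actual scheme, which would use proper public-key signatures rather than a shared-secret HMAC): a "mandate" is a tamper-evident record of user intent that downstream steps can verify before acting.

```python
import hashlib
import hmac
import json

# Assumption: a mandate is just a dict of the user's stated intent,
# canonicalized and signed so later steps can detect any tampering.

def sign_mandate(key: bytes, mandate: dict) -> str:
    """Produce a tamper-evident tag over a canonical encoding of the mandate."""
    payload = json.dumps(mandate, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_mandate(key: bytes, mandate: dict, tag: str) -> bool:
    """Check that the mandate presented now matches what the user signed."""
    return hmac.compare_digest(sign_mandate(key, mandate), tag)

key = b"user-held-secret"
mandate = {"action": "purchase", "max_usd": 50, "merchant": "example.com"}
tag = sign_mandate(key, mandate)

print(verify_mandate(key, mandate, tag))   # True: intent is intact
mandate["max_usd"] = 5000                  # an agent quietly escalates the limit
print(verify_mandate(key, mandate, tag))   # False: the mandate no longer authorizes this
```

The point is only that authorization rides on the verified record of intent, not on trusting whatever the agent claims at transaction time.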


I see a lot of discussion on that page about APIs and sign-offs, but the real sign-off is installing something on your computer and then doing things with it.

The liability is yours.

Claude messes up? So sad, too bad, you pay.

That's where the liability needs to sit.

And one point on this: every act of vibe coding is a lawsuit waiting to happen. But so is every act by a company.

An example is therac-25:

https://en.wikipedia.org/wiki/Therac-25

Vibe coding is still coding. You're giving instructions on program flow, logic, etc. My rant here is that I feel people think that if the code is bad, it's someone else's fault.

But is it?


It was more of a rhetorical question.

Anyway, that payment system looks sort of interesting. It seems to have buy-in from some of the payment vendors, so it might become a real thing.

But, you can give a claw agent your credit card number and have it go through the typical human-facing shop fronts, impersonating you the whole time and never actually identifying itself as a model. If you’ve given it the accounts and passwords that let it do that, it should be possible to use the LLM to perform the transaction and buy something. It can just click all the buttons and input the numbers that humans do. What is the vendor going to do, disable the human-facing shopfront?


I'm not a fan of the payment use case and agree with your take; I'm just a fan of the cryptographically verifiable mandate used throughout the process.

Apparently the AWS sovereign cloud is designed to continue operating even if the US offices cut it off. The servers are in the EU, and the people running them are subject to EU laws, not US ones.

Realistically a US executive could be legally required to give an EU engineer a command that they legally couldn’t follow. At that point I guess we find out if the engineers’ national or corporate identities are dominant. I suspect the former in most cases, but who knows?


The US exec probably doesn't want to order them either. So the game would be played and they did their best. There's another article about the US fighting data sovereignty requirements/laws in other countries, but that relies on their quickly dwindling soft power.

700 is actually a pretty good sample size unless you are looking at some tiny crosstab, or there’s some skew (which you won’t naively scale your way out of anyway).
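As a back-of-the-envelope check (my own sketch, not from the thread): for a simple random sample, the worst-case 95% margin of error at n = 700 is already under ±4 percentage points.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion from a simple random sample.

    p = 0.5 is the worst case; z = 1.96 is the 95% normal critical value.
    """
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(700), 3))   # 0.037, i.e. about +/- 3.7 points
```

Quadrupling the sample only halves that margin, which is why "just get more data" has rapidly diminishing returns unless you're slicing into small crosstabs.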

It is also interesting to note that the comparison is between recent comments and recent comments by new users. So, I guess this would take care of the objection that em-dashes (a perfectly fine piece of punctuation) have just been popularized by bots, and now are used more often by humans as well.

Maybe there is a bot problem. Seems almost impossible to fix for a site like this…


I think what a larger sample size would do is help capture changes over time. Humans tend to be more active at certain times of day, whereas bots don't tend to follow that pattern.

I think it is cynicism; at least, there’s an idea that once a company is dominant it should want regulation, as it’ll stifle competition (since the competition has less capacity for regulatory hoop-jumping, or the competition will have had less time to do regulatory capture).

Rather, a community could pass a law to prevent persistent filming of public locations—why not, right?

Well, not in the US, since filming in public is (at least AFAIK) constitutionally protected. It's weird, though: somehow two-party consent for audio recording (even in public) seems to be accepted by the courts. Although it's entirely possible that I have a misunderstanding.

It is actually kind of hard to look this up: I get lots of search results about the right to record police being protected constitutionally. And the lack of an inherent right to privacy, when in public. But, this doesn’t seem to preclude a locality from creating a law that disallows recording of public locations, right? You may not have a constitutional right to safe air, but as far as I know states can pass their own environmental regulations…

(All US specific)


I haven’t done a ton of porting. And when I did, it was more like a reimplementation.

> We’ve verified that every AST produced by the Rust parser is identical to the C++ one, and all bytecode generated by the Rust compiler is identical to the C++ compiler’s output.

Is this a conventional goal? It seems like quite an achievement.


My company helps companies do migrations using LLM agents and rigid validations, and it is not a surprising goal. Of course most projects are not as clean as a compiler is in terms of their inputs and outputs, but our pitch to customers is that we aim to do bug-for-bug compatible migrations.

Porting a project from PHP 7 to PHP 8, you'd want the exact same SQL statements to be sent to the server for your test suite, or at least be able to explain the differences. Porting AngularJS to Vue, you'd want the same backend requests, etc.
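A minimal sketch of what "bug-for-bug compatible" checking might look like (a hypothetical harness, not the parent's actual tooling): replay the same test inputs through the old and new implementations, capture the observable trace, e.g. the SQL statements each one emits, and require the traces to be identical, reporting the first divergence.

```python
# Hypothetical harness: old_trace and new_trace are the ordered lists of
# statements captured while running the same test suite against the old
# and new implementations.

def diff_traces(old_trace: list, new_trace: list):
    """Return (index, old_item, new_item) at the first divergence, or None."""
    for i, (old, new) in enumerate(zip(old_trace, new_trace)):
        if old != new:
            return (i, old, new)
    if len(old_trace) != len(new_trace):
        i = min(len(old_trace), len(new_trace))
        return (i,
                old_trace[i] if i < len(old_trace) else None,
                new_trace[i] if i < len(new_trace) else None)
    return None  # traces are identical

old = ["SELECT id FROM users WHERE email = ?", "UPDATE users SET seen = 1"]
new = ["SELECT id FROM users WHERE email = ?", "UPDATE users SET seen = true"]
print(diff_traces(old, new))
# (1, 'UPDATE users SET seen = 1', 'UPDATE users SET seen = true')
```

Every divergence then becomes an explicit decision: either fix the port or document why the difference is acceptable.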


It’s a very good way of getting LLMs to work autonomously for a long time: give it a spec and a complete test suite, shut the door, and ask it to call you when all the tests pass.
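That loop can be sketched in a few lines (a toy illustration with stand-in callbacks, not any particular agent framework): the only ground truth is whether the suite passes, and the agent is re-invoked until it does.

```python
def run_until_green(tests_pass, ask_agent_to_fix, max_rounds: int = 50) -> int:
    """Re-run the suite; after each failure, ask the agent for another fix.

    `tests_pass` and `ask_agent_to_fix` are caller-supplied callbacks
    (e.g. a pytest run and an LLM invocation). Returns the number of
    fix rounds needed.
    """
    for round_no in range(max_rounds):
        if tests_pass():
            return round_no
        ask_agent_to_fix()
    raise RuntimeError("agent never got the suite green")

# Simulated agent that happens to get the suite passing on its third fix:
state = {"attempts": 0}

def fake_tests_pass() -> bool:
    return state["attempts"] >= 3

def fake_fix() -> None:
    state["attempts"] += 1

print(run_until_green(fake_tests_pass, fake_fix))   # 3
```

The `max_rounds` cap matters in practice: without it, an agent stuck on an unsatisfiable spec will happily burn tokens forever.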

I strongly suspect it is just “obviously alive” things that have any sort of subjective experience. But we can’t really prove a negative, so we can thank our coffee machine spirits as a ritual, if we want.

You’d get defederated by instances that find that sort of thing objectionable, I guess. But, if you think it is a popular niche, couldn’t a separate community grow? That’s the whole promise of decentralization.

I do think it's a popular niche, but currently no one on the fediverse enjoys that stuff. But how can it grow when it's rejected?
