> It can take a long time just to source all the records of what's being argued over,
It seems to me that if you can't procure your own records in a timely manner in a court case, the case should be allowed to proceed with any assumptions based on them resolved in your opponent's favor. What's really the difference between taking 2+ years to procure a document and deleting that document?
Oftentimes the records aren't in the hands of either party and need to be subpoenaed. When you get them, they can open up entirely new lines of inquiry. Opposition will fight this tooth and nail so that the evidence can't be included, or they'll go on a fishing expedition under the guise of having all the facts on the table, and the court might just allow them. This process can take a very long time, and from what I've seen, the higher the stakes, the more the court will be willing to allow it to happen, so nobody can cry to the appeals court that something important was left out. Judges don't like their rulings overturned.
It's typically not a matter of having the documents, it's a matter of filtering them.
Suppose you have a corporate mail server with all your mail on it, and a competitor sues you. Your emails are going to be full of trade secrets, prices negotiated with suppliers, etc. Things that are irrelevant to the litigation and can't be given to the competitor. Meanwhile there are other emails they're entitled to see because they're directly relevant to the litigation.
What option do you have other than to have someone go read ten years worth of emails to decide which ones they get?
> Whats really the difference between taking 2+ years to procure a document and deleting that document?
The difference is obviously that they get the document in the 2nd+ year of the trial instead of never.
>What option do you have other than to have someone go read ten years worth of emails to decide which ones they get?
Funnily enough, this sounds like the exact use case for AI: streamlining timely, tedious, but important matters. Now that someone simply needs to verify that the filtered documents are relevant.
Of course, I'm assuming a world where AI works at this scale. Or a world where slow-walking discovery isn't a feature for corporations.
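For concreteness, a minimal sketch of what that triage loop might look like: an automated first pass flags likely-relevant documents, and a human only verifies the flagged set. The keyword matcher here is a deliberately dumb stand-in for a real model, and every name and threshold below is made up.

```python
# Toy sketch of AI-assisted first-pass relevance triage for discovery.
# relevance_score() is a trivial keyword matcher standing in for a model;
# RELEVANT_TERMS and the threshold are hypothetical.

RELEVANT_TERMS = {"contract", "pricing", "competitor", "agreement"}

def relevance_score(email_body: str) -> float:
    """Fraction of relevant terms appearing in the email (model stand-in)."""
    words = set(email_body.lower().split())
    return len(RELEVANT_TERMS & words) / len(RELEVANT_TERMS)

def triage(emails: list[str], threshold: float = 0.25) -> tuple[list[str], list[str]]:
    """Split emails into (flagged for human review, likely irrelevant)."""
    flagged, rest = [], []
    for e in emails:
        (flagged if relevance_score(e) >= threshold else rest).append(e)
    return flagged, rest

emails = [
    "lunch on friday?",
    "draft agreement on pricing with the supplier",
]
flagged, rest = triage(emails)
```

The catch, as the reply below notes, is that someone still has to check the `rest` pile for false negatives, which is the original problem in a new costume.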
> The difference is obviously that they get the document in the 2nd+ year of the trial instead of never.
Yeah, after using that year to make billions of dollars. That's how the current AI litigation is going. Once again by design. Pillage until the cows come home in 5-6 years.
> Now that someone simply needs to verify that the filtered documents are relevant.
Now someone simply needs to verify that the filtered-in documents are relevant and the filtered-out documents are not relevant. But wait, that was the original problem.
If they are trusting AI to replace labor, they should trust AI to be accountable for bad filters. What happens when a human misses a document or two?
> If they are trusting AI to replace labor, they should trust AI to be accountable for bad filters.
Surely all of the AI hype is true and there are no hypocrites in Corporate America.
> What happens when a human misses a document or two?
If they were obligated to produce it and don't they can get into some pretty bad trouble with the court. If they hand over something sensitive they weren't required to, they could potentially lose billions of dollars by handing trade secrets to a competitor, or get sued by someone else for violating an NDA etc.
>Surely all of the AI hype is true and there are no hypocrites in Corporate America.
Worst case, they're right and now we have more efficient processing. Best case, bungling some high-profile cases accelerates us toward proper regulation when a judge tires of AI scapegoats.
I don't see a big downside here.
>If they were obligated to produce it and don't they can get into some pretty bad trouble with the court.
Okay, seems easy enough to map to AI. Just a matter of who we hold accountable for it. The prompter, the company at large, or the AI provider.
There is an obvious downside for them which is why they don't do it. To make them do it the judge would have to order them to use AI to do it faster, which would make it a lot less reasonable for the judge to get mad at them when the AI messes it up.
> Just a matter of who we hold accountable for it. The prompter, the company at large, or the AI provider.
You're just asking who you want to have refuse to do it, because everybody knows it wouldn't actually be perfect, and the person you want to punish when it goes wrong is the person who is going to say no.
> I already said in the first comment that they have a financial incentive to stall the courts.
They have a financial incentive to not be found in contempt of court. And another financial incentive to not disclose sensitive information they're not supposed to disclose.
When false positives and false negatives are both very expensive, what's left is a resource-intensive slog to make sure everything is on the right side of the line. "Use the new thing that sacrifices accuracy for haste" is not a solution.
> I just want efficiency.
Asking for efficiency from the court system is like asking for speed from geology. That's not typically where you find it, and if it is, you're probably about to have a bad time.
The way you actually get efficiency is by having a larger number of smaller companies, so they're not massive vertically integrated conglomerates that you need something the size and speed of the US government to hold them in check.
>That's not typically where you find that and if it is you're probably about to have a bad time.
Why do we accept mediocrity from the government we pay our taxes to? They can't be as fast and lean as a small team, but there are surely optimizations we can make in process, especially as technology improves.
>by having a larger number of smaller companies, so they're not massive vertically integrated conglomerates that you need something the size and speed of the US government to hold them in check.
Agreed. Now I'd also like to have that sometime within my (maybe your) lifetime.
As someone who has built an e-discovery platform, I can tell you that any delays these days exist because they help minimize negative cash flow for the employer. In other words, exactly what corporate lawyers are paid for.
The technology for legal review is extremely fast and effective.
IIUC, the cameras in a Tesla have worse vision (resolution) at far distances than a human. So while your argument sounds fine in the abstract, it'll crumble in court when a lawyer points out that a similar driver would've needed corrective lenses.
> The military is unfortunately chock full of functional alcoholics. As long as they don't get caught drunk on the job, seen partying too much, get a DUI, or admit anything to their doctor, their clearance keeps getting renewed.
Well yeah. If it's not affecting your job then what does it matter? If you're a closet alcoholic then sure, that's something the Russians could hold over you.
There are millions of people with clearances; it's impossible to staff that at below-market wages and also above-average moral(?) standards.
And within high-trust societies (e.g. Japan, Korea, Vietnam), getting wasted lubricates social bonds in the workplace. I've met successful functional alcoholics. Seriously, they actually function and make lots of money. They're also fun to be around as long as you're not working for them.
> If it's not affecting your job then what does it matter? If you're a closet alcoholic then sure that's something the Russians could hold over you
Alcohol lowers inhibitions and alters decision making. Drinking a lot of alcohol more so than casual drinking. Frequently drinking a lot of alcohol has a very high area under the curve of poor decision making.
Functional alcoholism can come with delusions of sobriety where the person believes they’re not too drunk despite being heavily impaired.
So they’ll do things like have a few (or ten) drinks before checking their email. It makes them a better target for everything, phishing attacks being one example.
It’s not just about enemies holding it against you.
I think you’re misunderstanding the threat model for why a security clearance cares about impaired judgment in your off time, too. There’s more to these people’s lives than when they’re on the clock (figuratively speaking). Getting compromised anywhere is a problem.
I think you’re right. These are human systems, always fighting the prior battle. Nowadays, it’s probably true that the threat from poor digital hygiene exceeds any intention to leak. The way that’s demonstrated is by the Secretary of Defense misusing Signal instead of being one level smarter and intuitively making the right messaging choice. The system is very much ready to build an exhaustive file on Pete Hegseth. But the system, as a substitute for imagination, is not built to improve itself.
They don’t ask about any of that. If in a drunken blackout you find a USB drive on the subway and plug it in, the system is concerned about the blackout state, not the USB. Its self-preservation depends on telling the difference between incompetence and deception.
However, back when the Constitution was amended, the Fifth Amendment also applied to your own papers. (How is using something you wrote down not self-incrimination!?)
It only matters if, one year in the future, it is, because then all that back data immediately becomes allowed.
> See United States v. Hubbell. In Boyd v. United States,[60] the U.S. Supreme Court stated that "It is equivalent to a compulsory production of papers to make the nonproduction of them a confession of the allegations which it is pretended they will prove".
This opinion hasn't stood the test of time, but historically your own documents could not be used against you. Eventually the Supreme Court decided that since corporations weren't people, their documents could be used against them, and later that people weren't protected by their own documents either.
Sure. My point is strictly say what you want to mean.
If you believe this is bad for society then say "I can't see how allowing others to profit from your tax refund is good" and not "How is this not reverse Byzantine tax farming?".
> The problem is that there is no real feedback mechanism between what a congressperson votes for and their electability
You would describe this as being different from competitive?
I doubt any amount of money would matter if we had 1 representative per 30k people as written in the constitution. NY State has about 20M people, so you'd need to bribe ~300 of the ~600 representatives to get your way (and also do that for every other state).
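The back-of-envelope arithmetic behind those numbers (all figures approximate; real apportionment would differ):

```python
# Rough sketch of the claim above: reps for NY at 1 per 30k people,
# and how many you'd have to bribe for a bare majority.
ny_population = 20_000_000
people_per_rep = 30_000

reps = ny_population // people_per_rep  # representatives for NY alone
majority = reps // 2 + 1                # bare majority needed to "get your way"

print(reps, majority)  # roughly the "~300 of the ~600" in the comment
```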
Yes. Is there any evidence purple districts represent their constituents better? What's the difference between being primaried in a 90% red district and running against someone of a different party in a swing district?
Over the years I've begun to interface with a lot of PHP code, and there's a lot of really neat configuration stuff you can do, e.g. creating different pools for incoming requests (so, say, logged-out users or slow pages get their own pool). It seems to me that with all of the Rust web servers you still have to do a lot of that yourself in code, and it's not like you can reuse an existing pooling struct.
I don't think it helps with a lot of the super easy stuff like creating a pool with a line of configuration - fair!
I (personally) would rather spend the fixed several hours of doing a few things like that manually, vs. pounding my head on the desk for impossible-to-find bugs.
I mean somebody could make a singular rust dependency that re-packages all of the language team's packages.
But what's the threat model here? Does it matter that the Rust std library doesn't expose, say, regex functionality, forcing you to depend on the regex crate [1], which is written by the same people who write the std library [2]? If they wanted to add a backdoor to regex, they could add a backdoor to Vec. Personally I like the idea of having a very small std library so that it stays focused (and so that anything it needs has to be expressible in the language itself, unlike, say, Go generics or Elm).
Personally I think there's just some willful blindness going on here. You should never have been blindly trusting a giant binary blob from the std library. Instead you should have been vendoring your dependencies, and at that point it doesn't matter if it's 100 crates totaling 100k LOC or a single std library totaling 100k LOC; it's the same amount to review (if not less, because the crates can only interact along `pub` boundaries).
[1]: https://docs.rs/regex/latest/regex/
> I mean somebody could make a singular rust dependency that re-packages all of the language team's packages.
That's not the requirement though! Curation isn't about packaging, it's about independent (!) audit/test/integration/validation paths that provide a backstop to the upstream maintainers going bonkers.
> But what's the threat model here.
A repeat of the xz-utils fiasco, more or less precisely. This was a successful supply chain attack that was stopped because the downstream Debian folks noticed some odd performance numbers and started digging.
There's no Debian equivalent in the soup of Cargo dependencies. That mistake has bitten NPM repeatedly already, and the reckoning is coming for Rust too.
> Wasn't that a suspected state actor? Against that threat model your best course of action is a prayer and some incense.
No? They caught it! But they did so because the software had extensive downstream (!) integration and validation sitting between the users and authors. xz-utils pushed backdoored software, but Fedora and Debian picked it up only in rawhide/testing and found the issue.
> Notably, xz utils didn't use any package manager ala NPM and it relied on package management by hand.
With all respect, this is an awfully obtuse take. The problem isn't the "package manager"; it's (and I was explicit about this) the lack of curation.
It's true that xz-utils didn't use NPM. The point is that NPM's lack of curation is, from a security standpoint, isomorphic to not having any packaging regime at all, and equally dangerous.
> a Postgres dev running bleeding edge Debian
Exactly. Not sure how you think this makes the point different. Everything in Debian is volunteer, the fact that people do other stuff is a bonus. Point is the debian community is immunized against malicious software because everyone is working on validation downstream of the authors.
No one does that for NPM. There is no Cargo Rawhide or NPM Testing operated by attested organizations where new software gets quarantined and validated. If the malicious authors of your upstream dependencies want you to run backdoored software, then that's what you're going to run.
No? Who else has 2-3 years' worth of time to become a contributor and maintainer of obscure OSS utils?
Plus they made sockpuppets to put pressure on the original maintainer to give Jia Tan maintainer privileges.
> Exactly. Not sure how you think this makes the point different. Everything in Debian is volunteer, the fact that people do other stuff is a bonus.
What do you mean exactly? This isn't curation working as intended. This is some random dev discovering it by chance, after it snuck past the maintainers and curators of both Debian and Red Hat.
> Everything in Debian is volunteer, the fact that people do other stuff is a bonus. Point is the debian community is immunized against malicious software because everyone is working on validation downstream of the authors.
You can do same in NPM and Cargo.
Release a v1.x.y-rc0, give everyone a trial run, and see if anyone complains. If they do, that's downstream validation working as intended.
Then yank the RC version and publish a non-RC version. No one is preventing anyone from publishing release-candidate versions.
> No one does that for NPM. There is no Cargo Rawhide or NPM Testing
Because it makes no more sense to have a Cargo Rawhide than to have an xz-utils SID.
Cargo isn't an integration point, it's infra.
Bevy, which integrates many different libs, has a Release Candidate. But a TOML/XYZ library it uses doesn't.
Isn't xz-utils exactly why you would want a lot of dependencies rather than a single one?
If, say, Serde gets compromised, then only the projects depending on that version of Serde are compromised, whereas if Serde were part of the std library, every Rust program would be.
> That mistake has bitten NPM repeatedly already, and the reckoning is coming for Rust too.
Eh, the only thing that's coming is that using software expressly provided without a warranty will (expectedly) cause you problems at an unknown time.