larssorenson's comments

Your concerns aren't unfounded, but they're a bit misplaced. Password managers aren't intended to protect you from a local attacker, on your machine, like the malware you described. It is trivial to capture clipboard contents, as you say, but it's also similarly easy to keylog so your passwords would be exposed either way. If you consider your computer compromised or antagonistic like this, don't use it for anything sensitive.

Password managers are mostly intended to help facilitate unique passwords per account, to avoid password re-use which prevents credential stuffing. That is, if an attacker gets a hold of your password from one website they can't use it to log in everywhere.

Back to your concern, there isn't a solution for Windows in this space at the moment. Malware that's alive in your user context (or Satya forbid, SYSTEM) can do quite a bit thanks to Win32 APIs.


GMOs are not produced by targeted radiation in the way you have described, at least not as common practice (i.e. the GMO food you buy isn't created this way). GMO crops are generated in two ways: targeted gene modification (with deliberate modifications being made, in contrast to the randomness of the radiation method you described) and crossbreeding (which has a more randomized effect but does not involve radiation).

If you look at the [Wikipedia article on genetic engineering techniques](https://en.m.wikipedia.org/wiki/Genetic_engineering_techniqu...) radiation doesn't appear once.


Creating new crop varieties using radiation is a thing, here are some wiki articles discussing it:

https://en.wikipedia.org/wiki/Atomic_gardening
https://en.wikipedia.org/wiki/Mutation_breeding#New_mutagen_...

I also remember seeing a news article about using radiation to breed new rice variants that have more nutrients, but I can't find it anymore.


You're right, it is done. I worded my comment carefully to leave room for this because it's hard to prove a negative, but I stand by my statement, specifically in refuting the implication from the original comment: we aren't haphazardly blasting plant genomes with radiation, at scale, and guessing it's safe enough to feed to the world. I don't have numbers, but GMO crops today are by and large the result of non-radiation genetic engineering.


It's worth drawing the distinction that Steam, as opposed to Apple and their App Store, does not hold an exclusive monopoly and cannot dictate where users install software from. If a dev doesn't like Steam, there are other publishers and storefronts they can peddle their wares through. Similarly, users can go elsewhere to buy and install, even direct from the manufacturer.

Steam being the de facto choice is another issue entirely, and yet another discussion for their fee structure.


Also, Steam is not as controlled as the Apple App Store.

I bet that if Nvidia wanted to, they could publish GeForce Now there, for example.


I had this happen recently with a finance website, although technically in reverse: it silently stopped logging me in because the password field was changed to use HTML validation enforcing a max length of N, even though they had previously accepted my password of length N+1. Maddening.


Unfortunately, short of forcing everyone to use an ORM, I don't see how we can block the unsafe API, which I'm assuming to be the string-based query interface, e.g. `conn.query("SELECT * FROM users")`, since any interface that accepts a string allows a dynamically constructed string, which lets developers open themselves up to injection attacks. Only ORMs, AFAIK, can prevent this, e.g. db().users.all() or db().users.select(name="bob").

Maybe there's a clever trick I'm missing here.
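One middle ground short of a full ORM is the parameterized-query interface most database drivers already expose; here's a minimal sqlite3 sketch of the contrast (the table, rows, and attacker input are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('bob')")

name = "bob' OR '1'='1"  # attacker-controlled input

# Unsafe: string concatenation lets the input rewrite the query.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + name + "'"
).fetchall()

# Parameterized: the driver sends the value separately, so it is
# treated as data, never as SQL.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (name,)
).fetchall()

print(unsafe)  # every row comes back: the injection worked
print(safe)    # []: no user is literally named "bob' OR '1'='1"
```

The query template is still a string, so this doesn't stop a determined developer from concatenating, which is the gap the comment is pointing at.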


It'd be nice if languages offered a way for the query-compiling function to require that the query strings given to it are static, compile-time strings.
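Python has grown a mechanism along exactly these lines: PEP 675's `LiteralString`, enforced by type checkers (mypy/pyright) rather than at runtime. A sketch, where `run_query` is a hypothetical query runner:

```python
try:
    from typing import LiteralString  # Python 3.11+, PEP 675
except ImportError:                   # older Pythons: degrade gracefully
    LiteralString = str

def run_query(sql: LiteralString) -> str:
    # Hypothetical runner; the guarantee lives in the type checker,
    # which rejects non-literal arguments, not in any runtime check.
    return "executing: " + sql

print(run_query("SELECT * FROM users"))  # OK: a literal string

user_input = "1; DROP TABLE users"
# run_query("SELECT * FROM t WHERE id = " + user_input)
#   -> flagged by the type checker: "str" is not "LiteralString"
```

Concatenating two literals still type-checks as `LiteralString`, so safe composition of static fragments remains allowed.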


Compromising the hash and salt, which must be stored close together, makes it possible to check whether the salted hash corresponds to a password in a corpus of previously compromised passwords: an attacker can compute Hash(PW, Salt) for every PW in a list of leaked/cracked plaintext passwords. If they've guessed your password and it's shared across multiple services, that enables lateral compromise. Salting only prevents rainbow table attacks, where an attacker precomputes all possible hash values for a known keyspace (say, 8-character alphanumeric passwords) and simply looks up a match.

Encryption is concerning because it necessitates the ability to decrypt (encryption and decryption are inverse operations), and presumably there's a shared key stored somewhere to do the comparison, which means recovering the password is likely trivial compared to hash cracking and undermines any strength or complexity benefits. It also likely points to other bad behaviors built on this "feature", such as helpfully emailing you your plaintext password when you forget it.
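The Hash(PW, Salt) loop described above can be sketched in a few lines, with SHA-256 standing in for the service's real password hash (the salt, password, and wordlist are invented for illustration):

```python
import hashlib

def h(password, salt):
    # stand-in for the service's hash; real systems use bcrypt/scrypt/argon2
    return hashlib.sha256((salt + password).encode()).hexdigest()

# pilfered in a breach: the salt is stored right next to the hash
salt = "a9f3"
stolen_hash = h("hunter2", salt)

# previously leaked/cracked plaintext passwords
corpus = ["password", "12345678", "hunter2", "letmein"]

# salting doesn't stop this: hash each candidate with the known salt
recovered = None
for pw in corpus:
    if h(pw, salt) == stolen_hash:
        recovered = pw
print("recovered:", recovered)  # "hunter2"
```

The salt only forces the attacker to redo this loop per user rather than look answers up in a precomputed table.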


Rainbow Tables are a specific innovation in time-space tradeoffs (precomputation) rather than the name for all such attacks.

The specific clever trick in Rainbow Tables is the observation that rather than storing

  hash(password) : password
  5f4dcc : password
  c2fe67 : jimmy
  25d55a : 12345678
... we can build a function that takes the output from hash(password) to deterministically create a new candidate password, let's call this function pass(hash), and then chain the hash and our new function together as many times as we want. This lets us store much less data, while doing more work during our look-up phase.

  hash(pass(hash(password))) : password
  153dfc : password
  92fe87 : jimmy
  213eea : 12345678
Now if I find a hash 92fe87 in a password hash file, I do not learn that the password was jimmy, instead I need to compute pass(hash(jimmy)) and that's the password I was looking for. And if I find 39a4e6 which isn't in my list, I calculate hash(pass(39a4e6)) and discover that's 213eea, then I look this up in the table and I discover the password I need was 12345678. Obviously real Rainbow Tables don't just run the hash twice like this, but instead some fixed number of times chosen by the creator to trade off less space versus more work to find a password.
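The chain construction above can be sketched end to end; the toy hash (truncated SHA-256), the pass() reduction, the three starting passwords, and the chain depth of 2 are all illustrative choices:

```python
import hashlib

def H(pw):
    # toy hash: truncated SHA-256, for readability only
    return hashlib.sha256(pw.encode()).hexdigest()[:6]

def P(digest):
    # the comment's pass(): map a digest back to a new candidate password
    return "c" + digest[:4]

DEPTH = 2  # the comment chains twice; real tables use thousands of steps

def endpoint(start):
    # walk hash -> pass -> hash -> pass -> ... and keep only the last hash
    x = start
    for _ in range(DEPTH):
        x = P(H(x))
    return H(x)

# store only (final hash : starting password), as in the comment's table
table = {endpoint(pw): pw for pw in ["password", "jimmy", "12345678"]}

def crack(target):
    # try each position the target hash could occupy in a chain
    x = target
    for _ in range(DEPTH + 1):
        if x in table:
            # replay the chain from its start to find the preimage
            pw = table[x]
            for _ in range(DEPTH + 1):
                if H(pw) == target:
                    return pw
                pw = P(H(pw))
        x = H(P(x))
    return None

print(crack(H("jimmy")))  # finds "jimmy" by replaying its chain
```

Storage is one entry per chain instead of one per password, paid for with the hash-and-advance work inside `crack`.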


I should actually fix this. What I've described above is basic "chaining"; Rainbow Tables are a further improvement still, by Philippe Oechslin. The additional insight in Rainbow Tables is that we can reduce collisions in our hash-pass-hash-pass back and forth by modifying the pass function so that its behaviour varies by depth. This way, if a collision occurs at different depths in different chains (e.g. maybe the chain starting with the password "password" hashes immediately to 5f4dcc, but in another chain the value 5f4dcc is reached from the password "j58X_m04" after six steps), the next call to pass() will diverge again, so the collision only wastes a small fraction of our precomputation effort. If the collision happens at the same position in the chain, the final hash output will be identical to another chain's, so it's easy to discover the problem and apply whichever mitigation seems appropriate.
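The depth-dependent pass() function amounts to a tiny change (the reduction scheme here is invented for illustration):

```python
import hashlib

def H(pw):
    return hashlib.sha256(pw.encode()).hexdigest()[:6]

def P(digest, depth):
    # Oechslin's tweak: fold the chain position into the reduction, so a
    # collision reached at different depths diverges on the very next step
    return "c" + str(depth) + digest[:4]

d = H("password")        # same digest reached in two different chains...
print(P(d, 1), P(d, 4))  # ...at depths 1 and 4: different candidates
assert P(d, 1) != P(d, 4)
```

With a depth-blind P(), those two chains would merge permanently from the collision onward.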


Interesting, I haven't worked with rainbow tables very much since by the time I got into the world of hash cracking it had either been deprecated by salting or wasn't relevant (i.e. NTLM). That is a clever trick of trading back some of the space for extra time; I remember some of the rainbow table file sizes being ridiculous to the point of almost unusable haha.

edit: spelling


If one uses bcrypt for hashing passwords, as current best practice recommends, building what is basically a salted rainbow table becomes rather expensive, too. Not impossible, since the amortized cost for many common passwords is relatively low, but still sort of expensive.

Ideally a machine that generates and checks the hashes should be a box without a NIC, connected to the rest of the servers via a bunch of RS-232 ports. This would make extracting the salt much harder, down to effectively impossible. Few orgs can afford such a setup, though, due to the hassle of administering it.
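The per-guess cost that a slow KDF imposes can be sketched with the stdlib's PBKDF2 standing in for bcrypt (iteration counts and salt are arbitrary illustration values):

```python
import hashlib
import time

def slow_hash(pw, salt, iterations):
    # PBKDF2 from the stdlib as a stand-in for bcrypt; the iteration
    # count plays the role of bcrypt's cost factor
    return hashlib.pbkdf2_hmac("sha256", pw, salt, iterations)

salt = b"0123456789abcdef"  # a 128-bit salt, as bcrypt uses
for iters in (1_000, 100_000):
    t0 = time.perf_counter()
    slow_hash(b"hunter2", salt, iters)
    print(f"{iters:>7} iterations: {time.perf_counter() - t0:.4f}s per guess")
```

Every entry of a precomputed table pays this cost once per salt value, which is what makes salted tables over a slow KDF expensive.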


> Not impossible, since the amortized cost for many common passwords is relatively low, but still sort of expensive.

This statement seems like it gravely underplays the numbers.

Traditional Unix crypt uses a 12-bit salt. So this means your precomputation (whether a Rainbow Table or not) is 4096 times more expensive. That's just about plausible though already uncomfortable ("Sorry boss, I know you said the budget was $10 but I actually spent forty thousand dollars").

But bcrypt uses a 128-bit salt. So now your precomputation is so much more expensive that if the equivalent ordinary brute force attack on a single password cost 1¢ and took one second on one machine, you'd spend a billion dollars per second, over a billion seconds, on each of a billion machines, and still not even have scratched the surface of the extra work you've incurred to do your precomputation.

Or to put it another way: Impossible in practice.
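The multipliers in the comment above, computed directly:

```python
# precomputation cost multiplier = number of possible salt values
crypt_salts = 2 ** 12    # traditional Unix crypt: 12-bit salt
bcrypt_salts = 2 ** 128  # bcrypt: 128-bit salt

print(crypt_salts)  # 4096: one table per salt is just about affordable

# the thought experiment: a billion dollars per second, for a billion
# seconds, on a billion machines, at 1 cent per table entry
cents_spent = 10**9 * 100 * 10**9 * 10**9
print(cents_spent / bcrypt_salts)  # a minuscule fraction of the salt space
```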


I see! But the case I'm arguing about is when the salt is known, pilfered along the password hashes during a compromise.


So what are you precomputing?

Rainbow Tables are a precomputation "time-space tradeoff" attack. You do a bunch of preparatory work which is amortized over multiple attacks and results in needing space to store all your pre-computed data. This is nice for two reasons:

1. You get to do all the hard work before your attack, leaving less time between the attack and your successful acquisition of the passwords compared to work that's necessarily done after stealing the credential database.

2. You can re-use this work in other attacks

But if you're waiting until you know the salt you don't get either of these advantages, so Rainbow Tables are irrelevant.

It's like if somebody mentions the F-14 fighter jet in a discussion about the fastest way to get from Times Square to Trump Tower. Yes the F-14 fighter jet is a fast aeroplane, but it can't go to either of those places so it isn't relevant whereas Usain Bolt is a very fast human so he really could run from one to the other.


You're probably thinking of the twitter account @justsaysinmice, https://twitter.com/justsaysinmice.

For the uninitiated and otherwise unaware: many studies regarding illness, disease, nutrition, exercise, medicine, etc. perform their trials on mice. The resulting publications are picked up by news outlets, which headline the results without the caveat that they were only found in mice. Mice are, notably, not humans, and the results rarely (if ever) carry over 1:1.


It is worth noting that there exist legitimate RATs offering the bullet-point features you've highlighted. The marketing issue is another problem entirely; these legitimate projects are usually open source on GitHub or posted on public blogs.


> I think a lot of vegans drink soy and pea protein powders to get enough protein in their diet.

Actually, that's a common misconception: most vegans get more than enough protein eating a relatively healthy, varied diet. The exceptions are people particularly concerned with their protein intake, namely athletes, and bodybuilders/weight lifters specifically.


The previous discussion on HN highlighted mens rea with regard to establishing intent, i.e. they didn't have criminal intent (they went in with the understanding that they had full rights and privileges to be there) and would probably (hopefully) be vindicated in a court of law. There is still the matter of CoalFire vs. the state/local government, where CoalFire might be liable for not confirming that they had the proper authorization to dispatch their testers (which is part of the ongoing state-vs-local dispute).

IANAL but I've been tracking this pretty closely since I'm also a pentester; everything seems to reasonably indicate that the two guys should be released, but someone else is likely ending up in a court over this.

