bigstrat2003's comments

Yeah, Micro Center is awesome. They have become my first stop when I'm buying PC hardware. They aren't always the cheapest, but they are never so much more that it's a big deal, and I value having a local shop so it's well worth paying slightly more. Heck, the staff is even reasonably knowledgeable and has been able to help me out on occasion when I don't know something. I never had the opportunity to go to Fry's, but Micro Center is what I imagine they must've been like.

> I think this is where most technical writing is heading...

Not if you want anyone to actually bother reading it. I want to read what you have to say, flaws and all. Not what comes after the slop machine did a pass on your work.


I would remove the AI stuff entirely. At best it's not useful for students to learn about AI, at worst it's actively harmful (because they will rely on it and never learn). AI isn't yet to the point where it's actually a useful tool, so it doesn't make sense to devote learning time to it.

There's nothing about using permissive licenses that reduces freedom. Even if someone makes a closed fork of some software down the line, the original will always be there and will still be just as free. Comparing permissive licensing to a loss of freedom is not a valid comparison.

> Even if someone makes a closed fork of some software down the line, the original will always be there and will still be just as free.

Like MinIO, Solaris, Elasticsearch, Hashicorp Suite and countless others. The versions before the license changes are healthy as a doornail. You're absolutely right.

Some of them were re-forked; some were not.

Also, sometimes that closed fork is the only viable option, making the hardware it's running on an expensive doornail. I also don't like that.

I remember using SDKs and software forked from open ones, with version numbers like "1.8.7-really1.9.0-internal-thishardwareonly-special-3.2.5-unlocked", which only run on a distro from 2006 when it's a full moon on the 29th of February and the sum of the digits of the date is divisible by 7 and 11 at the same time.

Can you patch this? I guess you can, but where's the source? I bet somebody deleted it by accident and it's not present anymore.

Permissive licenses don't take away the four freedoms, but add a fifth one: the ability to take the other four away, without prior notice. This is what I don't like personally.

In short, I don't like doornails which are not actual doornails. Permissive licenses enable that freedom.


There is absolutely nothing "pro-business" about permissive licenses. People choose permissive licenses for all kinds of reasons. For example, I personally use them because I believe they are more free and thus more in line with my values. You shouldn't project unsubstantiated statements onto people's motives like this.

> You shouldn't project unsubstantiated statements onto people's motives like this.

I am not criticising their motives, I am criticising the result!

Also, definitions are hard. It's why we have pro-choice/pro-life and not anti-choice/anti-life - using the positive spin is a good faith characterisation of a position.

In much the same way, I am using pro-user/pro-business; if my intention was to vilify one of those positions I would have used pro-user/anti-user or pro-business/anti-business to label those positions.

No reasonable interpretation of pro-user/pro-business can make the audience think that I am unfairly characterising either of two positions.

I say this to address the use of the word "unsubstantiated" in your assertion about my characterisations.


With permissive licenses you often run into the following situation:

You buy something physical from a company, say a Unitree humanoid robot, a robot actuator, or an Arm SBC. These pieces of hardware come with their own proprietary SDK that the vendor sells for a significant fee, or a proprietary GPU driver without any hope of updates. The SDK heavily uses MIT-licensed code, and there is no possibility of modifying or inspecting the code for debugging.

From the perspective of the user, the system might as well be 100% proprietary, and their freedoms are maximally restricted. You could say this is fine since it doesn't detract from the original open source project, but remember that these companies would ordinarily have to pay significant development costs to build the same level of functionality, and they have no obligation to help or support your project financially. You as the open source developer then have to beg them to hire you, so you can do paid work unrelated to the original project and finally work on your project in your spare time, purely because it is possible to charge for hardware but not for the software the hardware depends on.

What I'm trying to get at here is that this means full vertical integration is the only way. The problem is that most hardware companies are hardware companies first and they don't care about software. They concentrate on making hardware, because each sale brings in money. They don't spend money on software, because it appears to be optional. You can just tell the customer or an open source community to bring their own software. The money that is needed to pay for open source projects flows through the very companies that refuse to spend money on software.

If you want to write open source software, you must be a hardware company so you are customer facing and have access to customer money that can be diverted to the development of the software.


> I agree that mandatory developer registration feels too heavy handed, but I think the community needs a better response to this problem than "nuh uh, everything's fine as it is."

Why would the community give a different response? Everything is fine as it is. Life is not safe, nor can it be made safe without taking away freedom. That is a fundamental truth of the world. At some point you need to treat people as adults, which includes letting them make very bad decisions if they insist on doing so.

Someone being gullible and willing to do things that a scammer tells them to do over the phone is not an "attack vector". It is people making a bad decision with their freedom. And that is not sufficient reason to disallow installing applications on the devices they own, any more than it would be acceptable for a bank to tell an alcoholic "we aren't going to let you withdraw your money because we know you're just spending it at the liquor store".


What if we asked users if they want extra protection? I think that would be nice.

This is the status quo. APK installation is disabled by default, and there is a warning when you go to enable it.

The point is "a warning" is not enough to communicate to people the gravity of what they are doing.

It is not enough to write "be careful" on a bag you get from a pharmacy... certain medications require you to both have a prescription, and also to have a conversation with a pharmacist because of how dangerous the decisions the consumer makes can be.

Normal human beings can be very dumb. It's entirely reasonable to expect society to try to protect them at some level.


OK so make the warning more annoying. Have a security quiz. Cooldown period of one day to enable. Require unlock via adb connected to laptop.

There are alternative solutions if the true goal is maintaining user freedom while protecting dumb users. But that is not the true goal of the upcoming changes.


> Require unlock via adb connected to laptop.

Fine, just:

- Don't reset it every 5 days / 5 hours / 5 dBm blip in Wi-Fi strength, because this pretty much defeats end-user automation, whether persistent or event-driven. This is the current situation with "Wireless Debugging", otherwise a cool trick for "rootless root", if only it didn't require being connected to Wi-Fi (and not just any Wi-Fi, but the same AP, breaking when the device roams in multi-AP networks).

- Don't announce the fact that this is on to everyone. Many commercial vendors, including those who shouldn't and those who have no business caring, are very interested in knowing whether your device is running with debugging features enabled, and if so, deny service.

Unfortunately, in a SaaS world it's the service providers that have all the leverage - if they don't like your device, they can always refuse service. Increasingly many do.


Sure, but I don't think decreasing chances of scam-by-app on Android by some minuscule amount is in any way comparable to prescription drugs.

I do? It's a trivially comparable thing? I'm not even talking about ALL prescription drugs. I'm talking about the fact that some have interactions that can kill you. Having "life savings gone" consequences from a random app install is that level of danger.

A non-trivial number of people should probably have to go see a specialist before being able to unlock sideloading in my opinion... which means we probably all would have to. It's annoying, but I actually care about other people.


I have a hard time with this because it's the world we've lived in forever. Everyone knows installing an "app" installs an executable.

Doesn't Android require a specific permission to be user-accepted for an installed app to read notifications? I think it's separate from the POST_NOTIFICATIONS permission.
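(For reference, it does: an app that wants to read notifications must declare a NotificationListenerService guarded by the BIND_NOTIFICATION_LISTENER_SERVICE permission, and the user must additionally grant it "Notification access" in system settings, which is indeed separate from POST_NOTIFICATIONS. A sketch of the manifest entry; the service class name is hypothetical:

```xml
<!-- AndroidManifest.xml: declares a listener the user must explicitly
     enable under Settings > Notifications > Notification access -->
<service
    android:name=".ExampleNotificationListener"
    android:exported="false"
    android:permission="android.permission.BIND_NOTIFICATION_LISTENER_SERVICE">
    <intent-filter>
        <action android:name="android.service.notification.NotificationListenerService" />
    </intent-filter>
</service>
```
)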

This seems to be an issue of user literacy. If so, doesn't it make more sense for a user to have the option to opt into "I'm tech illiterate, please protect me" than destroy open computing as we know it?


You can add 5 layers of "are you sure you want to do this unsafe thing" and it just adds 5 easy steps to the scam where they say "agree to the annoying popup".

You could even make this an installation-time option. If you want to enable the switch afterwards, you have to do a factory reset. Then, the attackers convincing the victims would get nothing.

Or make sideloading available only after 24 hours since enabling it. I would enable it on my new devices and wait 24 hours before installing F-Droid and other apps. Not a problem. Scammers might wait one day too but it decreases the chances of success because friends and family members can interfere.

But I'm afraid that this is security theater and the true goal is to protect revenues by making it hard or impossible to install apps that impact Alphabet's bottom line (e.g. third-party YouTube clients).


> But I'm afraid that this is security theater and the true goal is to protect revenues by making it hard or impossible to install apps that impact Alphabet's bottom line (e.g. third-party YouTube clients).

It's not just them. Every other SaaS, from banks to media providers to E2EE[0] chat clients to random apps whose makers feel insecure, or are obsessed with security [theater] best practices, just salivate at the thought of being able to check if you're a deviant running with root or debugging privileges, all because ${complex web of excuses that often sound plausible if you don't look too closely}. There's a huge demand for device attestation, remote or otherwise.

--

[0] - End-to-end Enshittified.


In the case of most of those businesses, it's only because they must tick checkboxes on a regulatory compliance sheet and/or deflect blame onto someone else. The problem is that this is a never-ending spiral of regulation after regulation and new ways to deflect blame, so after device attestation fails to solve all of their problems they'll end up pushing something else.

And now if I want to send a .apk to someone, they have to wipe their entire phone to install it? No thanks.

That's... brilliant. Enough work that you can't talk it through over the phone to someone non-technical. A sane default for people who don't know about security. And a simple enough procedure for the technically minded and brave.

It solves the 'smartest bear / dumbest human' overlap design concern in this situation.


Think about it the way you think about reading the fine print on agreements you sign. These can also have bad consequences.

But I guess not reading the TOS is another widespread problem, also fueled by companies like Google.


then make the unlock cost money

relatively easy for devs, but hard to scale for scammers


It's either that or, as suggested, hard-require developer validation for specific API permissions.

the problem is that in developing countries smart phones are a massive technology jump for people who lack the education to even have a clue what's going on. treating people as adults does not work if they don't have the education needed for that.

these people aren't gullible. they are ignorant (in the uneducated sense). they are not making bad decisions. they are not even aware that there is a decision to be made.

and worst of all, this problem affects the majority of those populations. if more than half of our population was alcoholic then we absolutely would restrict the access to alcohol through whatever means possible.

it's a pandemic. and we all know what restrictions that required.


> Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience.

-- C.S. Lewis


this is not about moral busybodies. it's not even a moral issue. it's an existential issue. this is about demands from the population to be safe from scams. those scammers ruin lives. do you think those people really prefer to be scammed and lose their life savings?

the correct solution is of course education, but education takes time. we can educate today's children so that they can protect themselves in the future. but that's the next generation. for the current generation, that kind of education is too late.

the proposed solution is a stopgap measure. do you have a better idea how to solve the problem? (maybe putting more effort into prosecution, but that costs money. or making banks responsible for covering the loss. but then you'll get banks demanding the protection. tyranny of the banks then? is that any better? that's actually happening in europe now.)

not doing anything will hurt a lot of people and make them unhappy. as a government you really don't want that either.


How about banks issuing 2FA via authenticator apps instead of through SMS notifications? Or handing out hardware tokens.
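For what it's worth, "authenticator app" 2FA usually means TOTP (RFC 6238): the bank and the app share a secret, and codes are derived from the current time on both sides, so there is nothing for a scammer to intercept in transit the way SMS codes can be. A minimal sketch in Python, illustrative only and not hardened for production:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a big-endian counter, dynamically truncated."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of last byte picks the 4-byte slice
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP with the counter derived from Unix time."""
    t = int(time.time()) if for_time is None else for_time
    return hotp(key, t // step, digits)
```

Both sides compute the same code independently, so intercepting the phone line gets an attacker nothing; the remaining weak point is phishing the code in real time, which is why hardware tokens with origin binding (FIDO2/WebAuthn) are stronger still.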

To add to that, I think it's important to point out that the problem of people not understanding how to safely use their devices is in large part caused by technology companies racing to get the widest adoption everywhere, both in terms of location and in terms of industries. I'm not against "intuitive UX design" in general, but at its extreme, it just fuels incompetence. We shouldn't now let them pick the most convenient option, the option that just happens to also increase their power over users, as a way to "fix" the problem.

I'm not against "intuitive UX design" in general, but at its extreme, it just fuels incompetence.

how does it do that? (i am not getting hung up on "intuitive", i just mean you argue that the currently used design fuels incompetence)

how is a UI designed that doesn't fuel incompetence?

i have a hard time imagining what design aspects matter here, and how to improve upon them.


> how is a UI designed that doesn't fuel incompetence?

I'm specifically talking about UX ("how a user interacts with and experiences a product, system, or service"), not necessarily UI.

> how does it do that? (i am not getting hung up on "intuitive", i just mean you argue that the currently used design fuels incompetence)

tl;dr We have a product, we want to make money, we need people to use the product. One of the things that stands in the way is people not understanding how to use our product. We will make sure they can get started as fast as possible, and not mention how they may hurt themselves with the product; that would scare them away. Hurting yourself with our product is in the broad "don't do stupid things" category. We will never explain the "framework" (in the case of an OS I mean apps: that apps can interact with each other and your data, and how you can, or cannot, control that), even in broad terms. Just click this button and get your solution.

It started with PCs and people not understanding how to not lose their documents. Now that every device is connected to the internet, the problem became worse.

You can now say that "sideloading" is stupid anyway, but this is not the only problem. Another thing that people still usually learn by painful experience is backups. There are fake apps on both stores. Another thing: in-band signaling. You cannot trust email, phones, WhatsApp, Messenger... Even if a friend you often chat with is messaging you, they could've just been hacked. Try to explain that you also cannot trust websites, and that even technical people don't have a good way of telling whether an email or a website is real.

But at least enrollment is fast and adoption metrics are growing. Since we are already in "move fast and break things" mindset, we will think about fixing such issues when it actually becomes a problem.

To be clear, I'm not saying that making technology easy is always bad, or that you should always expose the user to "the elements" and expect them to pipe commands in the shell. But I think that often the focus is only on making enrollment fast: "Get started".

What if we actually expected people to understand something about technologies they want to use?


What if we actually expected people to understand something about technologies they want to use?

but that's what we have now, and it's not working.

the implied question is: what if we don't allow people to use technology unless they can demonstrate that they understand it?

is that really something we want to do? this sounds like gatekeeping, elitism, and anti-innovation, because if fewer people are going to use a technology, then there is less motivation to build it.

remember, i think it was someone at IBM who said that the potential market for computers was some small number? and then it grew beyond anyone's wildest expectations?

do you think that would have happened if we had required understanding before we let anyone buy a home computer?

besides education, i don't know how to approach this issue.


Cars worked fine without seatbelts too. Just because the world goes on doesn't mean we can't do better.

Taking a step back though, I suspect there are cultural differences in approach here. Growing up in Europe, the idea of a regulation to make everyone safer is perfectly acceptable to me, whereas I get the impression that many folks who grew up in the US would feel differently. That's fine! But we also have to recognise these differences and recognise that the platforms in question here are global platforms with global impact and reach.


OTOH the controlling way modern software behaves is a US artifact, so the differences are not necessarily as clear-cut as this.

I grew up and live in Europe. I support the general idea of "regulation to make everyone safer" being an acceptable choice. At the same time, I vehemently oppose third-party interests reaching into my computing device and dictating what I can vs. cannot do with it.

But as you say, "global platforms with global impact and reach" - and so I can't set up my phone to conditionally read out text and voice messages aloud, because somewhere on the other side of the world, someone might get scammed into installing malware, therefore let's lock everything down and add remote attestation on top.

Unfortunately, the problem is political, not technological, and this here is but one facet of it. Ultimately, SaaS hands all the leverage to the provider: as users, it doesn't matter if we fully own the endpoints or have a user-friendly vendor; any SaaS can decide not to serve a client that doesn't give the service a user-proof beachhead.


I really don't think that's a cultural difference. I also grew up and live in the EU. What Google wants just does not solve the problem in any way.

And it's also not actual regulation, just new TOS from a company many are basically forced to interact with.


It might not "solve" the problem, but I'd expect it to significantly address the problem no?

I've heard much criticism of it being too heavy-handed, but I don't think I understand criticism that it won't improve security. Could you expand on that?


No. You seem to be implicitly arguing that unsigned apps are inherently less trustworthy than Play Store apps. That's a claim that needs to be proven first. And based on the huge amount of documented data exfiltration performed by Google-approved apps, I'm going to say that claim is false.

There is some world where somebody scammed through sideloading loses their life savings, and every country is politically fine with the customer, not the bank, taking the losses.

But for regular people, that is not really the world they want. If the bank app wrongly shows they’re paying a legitimate payee, such as the bank, themselves or the tax authority, people politically want the bank to reimburse.

Then the question becomes not if the user trusts the phone’s software, but if the bank trusts the software on the user’s phone. Should the bank not be able to trust the environment that can approve transfers, then the bank would be in the right to no longer offer such transfers.


If the actual bank app does that, or is even easy to fool into doing that, then the bank should be responsible. That's the world "regular people" want and it's the world as it should be.

If random malware the user chose to install does that, then that is not the bank's fault. The bank is no more involved than anybody else. And no, I don't think "regular people" want to make that the bank's fault.


The legal infrastructure for banking and securities ownership has long had defaults for liability assignment.

For securities, if I own stock outright, the company has to indemnify if they do a transfer for somebody else or if I lack legal capacity. So transfer agents require Medallion Signature Guarantees from a bank or broker. MSGs thereby require a lengthy banking relationship and probably showing up in person.

For broker to broker transfers, there is ACATS. The receiving broker is in fact liable in a strict, no-fault way.

As far as I know, these liabilities are never waived. Basically for the sizable transfers, there is relatively little faith in the user’s computers (including phones). To the extent there is faith, it has total liability on some capitalized party for fraud.

These defaults are probably unknown for most people, even those with large amounts of securities. The system is expected to work since it has been set up this way.

Clearly a large number of programmers have a bent to go the complete opposite direction from MSGs, where everything is private keys or caveat emptor no matter the technical sophistication of the customer. I, well, disagree with that sentiment. The regime where it’s possible for no capitalized entity to be liable for wrongful transfers (defined as when the customer believes they are transferring to a different human-readable payee than actually receiving funds) should not be the default.


> Basically for the sizable transfers, there is relatively little faith in the user’s computers (including phones). To the extent there is faith, it has total liability on some capitalized party for fraud.

But that is expensive, so my impression is that for non-sizeable transfers, and beyond banking, for basically anything dealing with lots of regular people doing regular-people-sized operations, the default in the industry is to try to outsource as much liability as possible onto end-users. So instead of treating users' computers as untrusted and making the system secure on the back end, the trend is to treat them as trusted, and then deal with the increased risk by a) legal means that make end-users liable in practice (keeping users uninformed about their rights helps), and b) technical means that make end-user devices less untrusted.

b) is how we end up with developer registries and remote attestation. And the sad thing is, it scales well - if device and OS vendors cooperate (like they do today), they can enable "endpoint security" for everyone who seeks to externalize liability.


Keeeep going.

Are banks POWERFUL? Do they have lots of money and/or connections to those who do? Do they have a vested interest in getting transactions right?

Absolutely!

Now, with all that money and power -- they -- whoever THEY are, need to come up with smart ways to verify transactions that don't involve me giving them all the keys to all my devices.

We have protections like this elsewhere - even when they have some "ownership." The bank kinda owns my house, but they still can't come in whenever they want.


Why do banks go through all the know-your-customer (KYC) process if not to identify the beneficial owner of every account? If they receive a transfer via fraud, then they either get it clawed back, have to pay it back, and/or get identified to law enforcement. If the last bank in the chain doesn't want to play by the rules, then other banks shouldn't transfer into them, or that bank itself should be held liable.

This is more or less how people expect things to work today ....


In the case of some knowing, or unknowing, money mule in the chain or at the end of the chain, the intermediary or final banks may not be at fault. The bank could have followed KYC procedures, in that somebody with that name actually existed who controlled the account.

The money mule themselves is almost certainly insolvent and unable to pay the damages. The money mule can also change currencies (either to a different fiat currency or to crypto), putting the ultimate link completely out of reach of the originating country.

If intermediary banks are deputized and become liable in a no-fault sense, then legitimate transfers out become very difficult. How does a bank prove a negative for where the funds come from? De-banking has already been a problem for a process-based AML regime.


I'm a "regular" person, as are all the signatories, and you don't speak for us.

> At some point you need to treat people as adults, which includes letting them make very bad decisions if they insist on doing so.

The world does not consist of all rational actors, and this opens the door to all kinds of exploitation. The attacks today are very sophisticated, and I don't trust my 80-yr old dad to be able to detect them, nor many of my non-tech-savvy friends.

> any more than it would be acceptable for a bank to tell an alcoholic "we aren't going to let you withdraw your money because we know you're just spending it at the liquor store".

This is a false equivalence.


It's not a false equivalence at all. Both situations are taking away someone's control of something that they own, borne from a paternalistic desire to protect that person from themselves. If one is acceptable, the other should be. Conversely if one is unacceptable, the other should be unacceptable as well. Either paternalistic refusal to let people do as they wish is ok, or it isn't.

Maybe not, but I think that overextending any idea like that in the opposite direction of whatever point you are trying to make at least devolves into a "slippery slope" argument. For instance, is your point that all security on phones that impedes the freedom of the user (for instance, HTTPS, a forced password on initial startup, not allowing apps to access certain parts of the phone without user permission, verifying boot image signatures) should be removed as well?

No, that's not my point at all. Measures such as that are a tool which is in the hands of the user. There is a default restriction which is good enough for most cases, but the user has the ability to open things up further if he needs. What Google is proposing takes control out of the user's hands and makes Google the sole arbiter of what is and is not allowed on the device.

None of the measures I mentioned are changeable by the user, except possibly sideloading an HTTPS certificate. That's the only way any of those measures even work; if it wasn't set as invariants by the OS, they would be bypassable.

>There is a default restriction which is good enough for most cases, but the user has the ability to open things up further if he needs.

But this is what the other guy's point is. You are defining "good enough for most cases" in a way that he is not, then making the argument that what he says is equivalent to not allowing an alcoholic to buy beer. Why can you set what level is an acceptable amount of restriction, but he can't?


But it's not a slippery slope, because it's not taking it to the next level. It's the same level, just a different thing.

The alcoholic knows the bad outcomes, and chooses to ignore them. The hapless Android user does not understand the negative consequences of sideloading. I think this makes for a substantial difference between those two.

> The hapless Android user does not understand the negative consequences of sideloading.

Then make sideloading disabled by default but enable it when the user taps 7 times on some settings item. At that point, explain those "negative consequences" to them, explain them real good, don't spare anything, and if they still hit "Yes, continue to enable sideloading", you do that immediately, rather than increasing their haplessness with other made-up excuses.

Simple.


Protecting from scams isn't protection from the victim themselves. That should be obvious from the fact that very intelligent and technologically literate people too can fall for phishing attacks. Tell me for example, how many people in your life know how a bank would ACTUALLY contact you about a suspected hijacking and what the process should look like? And how about any of the dozens of other cover stories used? Not to mention the situations where the scammers can use literally the same method of first contact as the real thing (eg. spoofed). ...And the fact that for example email clients do their best to help them by obscuring the email address and only showing the display name, because that's obviously a good idea.

> Protecting from scams isn't protection from the victim themselves.

That is where we differ. It is, ultimately, the victim of a scam who makes the choice of "yes, this person is trustworthy and I will do what they say". The only way to prevent that is to block the user from having the power to make that decision, which is to say protecting them from themselves.


But the proposal here, requiring developers to register their identities, doesn't actually impact consumers at all. They still have the ability to make the decision about whether or not to trust someone.

None of these things requires "locking down phones." Every single thing you've mentioned can be done in a smarter way that doesn't involve "individuals aren't allowed to modify the devices they purchase."

You can't make a statement like that and provide no examples. What are some of your ideas for doing that?

> Life is not safe, nor can it be made safe without taking away freedom.

So... no food and safety regulations, because life is not safe, and people should have the freedom to poison food with cheaper, lethal ingredients because their freedom matters more?

You're right that things can't be made more safe without taking away the freedom to harm people. Which is why even the most freedom-loving countries on earth strike a balance. They actually have tons and tons of safety regulations that save tons and tons of lives, even if from your point of view that means not "treating people as adults". You have to wear a seatbelt, even if you feel like you're not being treated like an adult. Because it's also not just your own life you're putting at risk, but your passengers' as well.

You're taking the most extreme libertarian stance possible. Thank goodness that's an extremely minority view, and that the vast, vast majority of voters do actually think safety is important.


Thank goodness there are FOSS options, even for mobile phones, and none of us are required to accept proprietary junk.

If they make FOSS illegal, guess I’ll be a criminal. Come and take it.


Your post is addressing a strawman, not what I said. But to answer the words you so ungraciously put in my mouth:

> So... no food and safety regulations, because life is not safe, and people should have the freedom to poison food with cheaper, lethal ingredients because their freedom matters more?

This is harm to others and is very obviously something we should enforce. There are unreasonable laws about food (banning the sale of raw milk cheese for example, which most of the world enjoys with perfect safety), but by and large they are unobjectionable.

> You're right that things can't be made more safe without taking away the freedom to harm people. Which is why even the most freedom-loving countries on earth strike a balance.

I never said I was opposed to striking a balance. Of course we can strike a balance. Indeed we already have when it comes to installing apps on Android. But these measures are being advanced as if safety were the only consideration, which it isn't.

> You're taking the most extreme libertarian stance possible.

No, that is what you have projected onto me. That's not actually what my stance is.


When you say:

> Life is not safe, nor can it be made safe without taking away freedom. That is a fundamental truth of the world... Someone being gullible and willing to do things that a scammer tells them to do over the phone is not an "attack vector". It is people making a bad decision with their freedom.

That sounds pretty black and white extreme to me, when you talk about things like "life is not safe" and a "fundamental truth". I don't see any appreciation of balance there.

Maybe it's not what you meant to write, but your comment continues to absolutely come across as extremist and anti-balance to me. It seems like I was mischaracterizing what you actually believe (now that you've elaborated), but I don't think I mischaracterized what you wrote.


Your analogy is terrible because it doesn't do a proper accounting of "harm" and "risk."

Food and seatbelts, that's literal health and life-and-death; very immediate and visible.

"Cybersecurity" rarely is; and even when it is, the problem is that the centralized established authorities (like google) aren't at all provably good at this.


No, this is a terrible take. People’s entire financial future is at stake, including in the third world.

If those bad decisions have a lot of higher order effects and they turn out to be very costly for society, then limiting freedom seems worth it.

And it seems Google thinks society is beginning to unravel in SEA due to scammers. Trust breaks down, people stop using phones to do important things, GDP can shrink, banks go back to cheques, trees will be cut down!!

It's bad to let people go and catch the zombie virus and then come back and spread it, right?

...

I don't like it, but the obvious decision is to set up a parallel authority that can issue certificates to developers (for sideloading), so we don't have to trust Google. Let the developer community manage this. And if we can't, then Google can revoke the intermediate CA. And of course Google and other manufacturers could sell development devices that are unlocked, etc.


You say that until it happens to your mother/father/bf/gf/grandparent/…

Then we will see how you will react.


The reality in South East Asia doesn't support that. You're assuming that the potential victims are able to use an alternative to Android, or that they are willing and able to educate themselves about scams. The reality in these countries is that neither is the case in practice. Daily lives depend a lot on smartphones, and they play a big role in cashless financial transactions. Network effects play a big role here. Android devices are the only category that is both widely available and affordable.

Education is also not that effective. Spreading warnings about scams is hard and warnings don't reach many people for a whole laundry list of reasons.

The status quo is decidedly not fine. Society must act to protect those that can't protect themselves. The only remaining question is the how.

Google has an approach that would work, but at a high cost. Is there an alternative change that has the same effects on scammers, but with fewer issues for other scenarios?


The status quo may not be perfect but it is the best we can do. We try to educate people about scams. We give them warnings that what they are doing can be dangerous if misused. If they choose to ignore those things and proceed anyway, the only further step society could take is to take away the person's freedom to choose. And that is an unacceptable solution.

Society takes away individuals' freedom to choose all the time. You can't choose not to pay your taxes. You can't choose to board a passenger plane without passing a security check. You can't just get a loan without giving the bank any guarantees, etc.

Education isn't really working at this global scale. It doesn't reach people the way you seem to believe it does. Many, if not most, people are generally disinterested in learning new things, and this gets amplified when it involves technology.


> The status quo may not be perfect but it is the best we can do.

Nope. We could, for example, ask developers to register with their legal identity to release apps.


The original post lays out why it's not possible to do well: privacy apps, sanctioned countries, apps made by people for themselves to avoid clouds and third parties, etc.

Simple example: I have a FOSS VPN app running on my phone to avoid censorship and surveillance in some countries I visit. While using this app is no problem, non-anonymous development might carry consequences for the developer in some dictatorial jurisdictions (of which there are plenty). I'm not sure all devs of such systems would be willing to give their IDs.

Another example is that this way the US can cut countries and people it doesn't like out of mobile usage (which basically equals modern social life). Look at the sanctioned judges of the International Criminal Court, because the US protects war criminals.


That would be worse than the status quo.

The open source community should ask for its own install key, and that's it.

Play store can be fast and verification based and the F/OSS stores can be slower, reputation and review based.

...

But fundamentally the easiest thing is to ask people to pay to unlock the phone's security barriers, this makes it harder and costlier for scammers.


This is a terrible response from a software developer, by the way. You can use this line to dismiss any security concern.

It signals that you don't care much about security, and that you don't care about non-technical users, and don't even have the capacity to see how they view a system.

Sure, you can analyze domain names effectively, you can distinguish between an organic post and an ad, you know the difference between Read and Write permissions to system files, etc...

But can you put yourself in the shoes of a user who doesn't? If not, you are rightfully not in a position to be a steward of such users, and Google is.


> At some point you need to treat people as adults, which includes letting them make very bad decisions if they insist on doing so.

That's right, it's your decision to use Android. If you choose to do so, that's on you.


It's not like there's much of an alternative, but that's irrelevant anyway. Android is becoming more like an iPhone, and as long as the OS is able and willing to reliably report to anyone asking just how tightly it is locked down, we have zero choice in the matter, because increasingly many important apps (like bank and government apps) simply refuse to work if the device is locked down less than it could be.

You're right, all Android users who are upset about this change are free to switch to iOS.

Right, like someone who can only afford a $100 phone can buy the cheapest iPhone, which is 5x more expensive.

This is a lot like the geeks who hate the idea of ad-supported services and think that everyone should just pay for every service they use.

FWIW: I do exclusively buy Apple devices, pay for streaming services ad free tier, the Stratechery podcast bundle, ATP and the Downstream podcasts and Slate. I also pay for ChatGPT and refuse to use any ad supported app or game.


I think that OP's point was that the alternative is even more locked down. There is no option for people who don't want to be nannied.

If there ever was a choice of a non-walled garden, it has been taken away. How can you bank without one of the two?

Nice strawman. People want the ability to decide for themselves whether or not to install some APK, they are not saying every APK under the sun is trustworthy.

It is a simplification, not a strawman.

If you want to make the decision to install Hay Day, the user should be able to know that it is the Hay Day from Supercell or from Sketchy McMalwareson.

99.9% of app developers should have no issue with their name being associated with their work. If you genuinely need to use an anonymously published app, you will still be able to do that as a user.


> If you genuinely need to use an anonymously published app, you will still be able to do that as a user.

I'm pretty sure the goal of Google's changes is to make it so you can't.


Android already tells users when they're installing software from outside the Play Store and shows big scary warnings if Play Protect is turned off. What else do you want? If I want to install something from Sketchy McMalwareson after all that, that's my phone and my business.

Yeah, IMO the small standard library in Rust is a big mistake, one of the few the language has made. When push comes to shove the stdlib is the only thing you can count on always being there. It's incredibly valuable to have more tools in the stdlib even if they aren't the best versions out there (for example, even if I normally use requests in Python urllib2 has saved my bacon before), and it doesn't hurt anything to have them there.
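For what it's worth, the Python 3 version of that fallback (urllib2 became urllib.request) still works when nothing else is installed. A minimal sketch, with `fetch_json` being a hypothetical helper name:

```python
import json
import urllib.request

# Hypothetical helper: fetch and decode JSON using only the standard
# library, so it runs on a bare Python install with no third-party
# packages like requests available.
def fetch_json(url: str, timeout: float = 10.0):
    req = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)
```

Clunkier than requests, sure, but it's always there.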

I don't think the situation is that comparable to python, since in python the library has to be present at runtime. And with the dysfunctional python packaging there's potentially a lot of grey hairs saved by not requiring anything beyond the stdlib.

With Rust, it's an issue at compile-time only. You can then copy the binary around without having to worry about which crates were needed to build it.

Of course, there is the question of trust and discoverability. Maybe Rust would be served by a larger stdlib, or some other mechanism of saying this is a collection of high-quality well maintained libraries, prefer these if applicable. Perhaps the thing the blog post author hints at would be a solution without having to bundle everything into the stdlib, we'll see.

But I'd be somewhat wary of shoveling a lot of stuff into the stdlib; it's very hard to get rid of deprecated functionality. E.g. how many command-line argument parsers are there in the Python stdlib? 3?
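Yes, three: getopt, optparse, and argparse all still ship. A quick sketch of the same flag parsed with each shows how deprecated modules linger once they're in a stdlib:

```python
import argparse
import getopt
import optparse  # soft-deprecated since Python 3.2, but still shipped

# The current recommendation: argparse.
parser = argparse.ArgumentParser()
parser.add_argument("--verbose", action="store_true")
args = parser.parse_args(["--verbose"])

# The two older stdlib parsers still work, because removing them would
# break existing code -- exactly the long-term maintenance burden a
# larger stdlib signs up for.
opts, _ = getopt.getopt(["--verbose"], "", ["verbose"])
old = optparse.OptionParser()
old.add_option("--verbose", action="store_true")
old_values, _ = old.parse_args(["--verbose"])
```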


On the other hand, a worse implementation in the stdlib can make it harder for the community to crystallize around the best third-party option, since the stdlib solution doesn't have to "compete in the arena".

Go has some of these.

Maybe a good middle-ground is something like Rust's regex crate where the best third-party solution gets blessed into a first-party package, but it is still versioned separately from the language.


> MIT is more flexible in its use than GPL, but doesn’t help ensure that software remains open.

Sure it does. The original software will always remain open. It isn't like people can somehow take that away.


GPL is copy left, it has a stated goal of encouraging more software to be OSS, including new contributions. That’s what I meant by software remains open. MIT on the other hand can be used in closed source situations. While the original code will remain open, future changes are not required to be open source.

> I'm going to be honest, you can feel the AGI when you use newer agentic tools like OpenClaw or Claude.

You're right. I can feel how far away it is and how these tools will in no way be capable of getting us there.


Are you using Claude Opus 4.6?
