> If the company is <30 people, reach out to the CEO directly.
When the people you're interviewing with are 'already senior' (e.g. direct reports to the CEO), you can sometimes make your case worse rather than better, because it feels like you're going over their head.
So, rather than size, consider:
- If the interviewer(s) in question feel like you're trying to circumvent them, you're probably making your case worse.
- The kind of CEO that tends to meddle in things below their level might drag down your case even if they like you, because folks can develop a distaste for their meddling.
- Doing this for senior roles, or roles at small companies, can actually be worse, because the person in question is more likely to be close to the CEO in the reporting chain, and the CEO is more likely to meddle directly in your hiring process. Zero or one level removed can be the worst.
> When the people you're interviewing with are 'already senior' (e.g. direct reports to the CEO), you can sometimes make your case worse rather than better, because it feels like you're going over their head.
If that happens, then it's a very good thing - you do not want to work at a company where people are precious about how they succeed. If a great candidate (e.g. you) drops into the inbox of the CEO, who forwards it to someone else, and their first reaction is 'Well, they violated my personal kingdom by going over my head!', then that is a manager you do not need in your life.
I interpreted this post as being about how you get an interview in the first place, so the hope would be that the CEO forwards your mail to this senior person you're worried about.
Even still - a lot of senior folks, sadly, don't take it super well when candidates are forwarded their way by people above them while they're running a process.
Remember that you may not know who the hiring manager is, and there may not even be a relevant posted position. I've gotten lucky just by reaching out to very senior people at a couple of different companies (of very different sizes) over time.
I understood the OP to be saying "reach out to the CEO to express your interest in working for the company in order to get to the interview stage", not "email the CEO to make a case for being hired when you're already in the interview pipeline"
Very cool! I've definitely dreaded trying to make sense of the diverse infra every time we've needed to do this in the past. Several of these are quite simple, but every extra tooling combo in CI can be a real PITA.
They're threatening to remove servers from Italy. They're explicitly NOT threatening to block Italians from being able to access sites through Cloudflare.
I have my fair share of problems with CF, but I assume here that they're threatening higher latency (i.e. requests from Italian users would have to go to a neighboring country to be routed) rather than blocking.
How freaking expensive do you think infrastructure is? It's not that expensive, and certainly not anywhere close to the point where it would make a noticeable impact on GDP.
It's talking about Luau (gradually typed, https://luau.org/), not Lua.
Hopefully https://github.com/astral-sh/ty will make the Python typing situation better, but absent that, Python typing is... not great. Honestly even with that it feels subjectively very finicky.
Design-by-Contract (DbC) type and constraint checking at runtime, in the style of icontract or pycontracts, integrated with (or as fast as) astral-sh/ty, would make type annotations useful at runtime.
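To make that concrete, here is a minimal hand-rolled sketch of the kind of runtime checking DbC libraries like icontract provide. This is illustrative only (the `checked` decorator is made up for this comment, not icontract's actual API), and it uses only the standard library:

```python
import functools
import inspect
import typing

def checked(*, require=None, ensure=None):
    """Illustrative DbC-style decorator: enforce simple type annotations
    plus optional pre/postconditions at call time (hypothetical API)."""
    def decorate(fn):
        hints = typing.get_type_hints(fn)
        sig = inspect.signature(fn)

        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            bound = sig.bind(*args, **kwargs)
            bound.apply_defaults()
            # Runtime type checks driven by the annotations (plain classes only).
            for name, value in bound.arguments.items():
                expected = hints.get(name)
                if isinstance(expected, type) and not isinstance(value, expected):
                    raise TypeError(f"{name}={value!r} is not {expected.__name__}")
            # Precondition over the bound arguments.
            if require is not None and not require(**bound.arguments):
                raise ValueError(f"precondition failed for {fn.__name__}{tuple(bound.args)}")
            result = fn(*args, **kwargs)
            # Postcondition over the result.
            if ensure is not None and not ensure(result):
                raise ValueError(f"postcondition failed for {fn.__name__}")
            return result
        return wrapper
    return decorate

@checked(require=lambda a, b: b != 0, ensure=lambda r: isinstance(r, float))
def divide(a: int, b: int) -> float:
    return a / b

divide(6, 3)      # OK, returns 2.0
# divide(6, 0)    # would raise ValueError: precondition failed
# divide("6", 3)  # would raise TypeError: a='6' is not int
```

A fast external checker could in principle do the type half of this statically and leave only the value constraints to run at call time, which is roughly the appeal of pairing DbC with something like ty.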
> Mypyc ensures type safety both statically and at runtime. [...] `Any` types and erased types in general can compromise type safety, and this is by design. Inserting strict runtime type checks for all possible values would be too expensive and against the goal of high performance.
Both are also early Go engineers and developers who hacked on the Go stdlib for years. Most people in the Go community know them. Great people, and the idea speaks for itself. I wish them the best of luck.
Color me confused, but what do containers add to FreeBSD beyond jails? Jails have their own IP addresses and root filesystem, plus they use the host OS's version of libc and OpenSSL/LibreSSL and all the other core utils.
Is it the convenience utilities for building and running container images?
However, I still can't pinpoint what the value proposition is compared to using jails. Is there anybody around here able and willing to shed some light? I know I didn't use Cunningham's Law to start the conversation like a clever netizen, but maybe, just this once, a good-faith response to a good-faith question is possible.
HN is a pretty simple, efficient monolithic web application. Some updates might need a restart. It's OK for some web requests to fail during that time. HN isn't life-critical with six-nines uptime requirements.
Tbh like 99% of web apps aren’t critical - most of them are for buying something or providing infrastructure to make it easier to buy something anyway.
It’s fine if your online shop is down for a few minutes (of course the business won’t see it like that but it’s true)
A sales site being down might lose you a sale. But the simplicity might save you so much more than that loses you. And often the complexity of high-availability infrastructure results in more downtime than it prevents.
For stuff like HN, I like the peek behind the scenes it provides. It's all just software written by some humans, and way too often people take themselves and their shitty software way too seriously.
I feel like this obsession with zero downtime has gotten a bit silly. Sure, for some things it's damn near required (though I imagine that's fewer things than most people think), but it 100% does not matter even a little bit if HN is unavailable for 10 seconds or so.
Everyone went right for “downtime” - no, that’s not an issue. I would have expected a configuration change, which wouldn’t require a restart but might indeed result in downtime.
Or even a configuration change that some control system notices and does restart the service.
It's the manual, hands-on connotation (maybe only in my mind) of telling someone a restart is involved. Automate this stuff - don't want the code in the server all year? Fine, have a process rebuild and relaunch on a schedule that makes sense. You might have downtime, but it's definitely less hands-on.
Just to talk about a different direction here for a second:
Something that I find to be a frustrating side effect of malware issues like this is that it seems to result in well-intentioned security teams locking down the data in apps.
The justification is quite plausible -- in this case WhatsApp messages were being stolen! But the thing is, if this isn't what they steal, they'll steal something else.
Meanwhile, locking down those apps so that only apps with a certain signature can read from your WhatsApp means that if you want to back up your messages or read them for any legitimate purpose, you're now SOL, or reliant on a usually slow, non-automatable, UI-only flow.
I'm glad that modern computers are more secure than they have been, but I think that defense in depth by locking down everything and creating more silos is a problem of its own.
I agree with this, just to note for context though: this (or rather the package that was forked) is not a wrapper of any official WhatsApp API or anything like that; it poses as a WhatsApp client (WhatsApp Web), whose protocol the author reverse engineered.
So users go through the same steps as if they were connecting another client to their WhatsApp account, and the client gets full access to all data of course.
From what I understand WhatsApp is already fairly locked down, so people had to resort to this sort of thing – if WA had actually offered this data via a proper API with granular permissions, there might have been a lower chance of this happening.
I could certainly see the value in this in principle but sadly the labyrinthine mess that is the Apple permission system (in which they learned none of the lessons of early UAC) illustrates the kind of result that seems to arise from this.
A great microcosm illustration of this is automation permission on macOS right now: there's a separate allow dialog for every single app. If you try to use a general purpose automation app it needs to request permission for every single app on your computer individually the first time you use it. Having experienced that in practice it... absolutely sucks.
At this point it makes me feel like we need something like an async audit API. Maybe the OS just tracks and logs all of your apps' activity and then:
1) You can view it of course.
2) The OS monitors for deviations from expected patterns for that app globally (kinda like Microsoft's SmartScreen?)
3) Your own apps can get permission to read this audit log if you want to analyze it your own way and/or be more secure. If you're more paranoid maybe you could use a variant that kills an app in a hurry if it's misbehaving.
Sadly you can't even implement this as a third party thing on macOS at this point because the security model prohibits you from monitoring other apps. You can't even do it with the user's permission because tracing apps requires you to turn SIP off.
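Just to make the idea concrete (purely hypothetical, since no OS exposes anything like this today): if there were an append-only, JSON-lines audit log of per-app activity, a user-side consumer of it might look roughly like this sketch. The log path, its schema, and the per-app "expected" sets are all made up.

```python
# Hypothetical sketch: assumes the OS writes an append-only JSON-lines
# audit log of per-app activity. Neither the log nor its schema exists
# on macOS today; this only illustrates the "async audit" idea.
import json
from collections import Counter
from pathlib import Path

AUDIT_LOG = Path("/var/log/hypothetical-audit.jsonl")  # made-up path

# What we consider "expected" behavior per app (in practice this would be
# learned or shipped by the OS vendor, SmartScreen-style).
EXPECTED = {
    "com.example.notes": {"file.read", "file.write"},
    "com.example.chat": {"network.connect", "contacts.read"},
}

def review(log_path: Path) -> None:
    deviations = Counter()
    for line in log_path.read_text().splitlines():
        event = json.loads(line)            # e.g. {"app": ..., "action": ...}
        app, action = event["app"], event["action"]
        if action not in EXPECTED.get(app, set()):
            deviations[(app, action)] += 1

    for (app, action), count in deviations.most_common():
        print(f"{app}: unexpected '{action}' x{count}")
        # A more paranoid variant could suspend or kill the app here.

if AUDIT_LOG.exists():
    review(AUDIT_LOG)
```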
> Maybe the OS just tracks and logs all of your apps' activity
The problem here is that, like so many social-media apps, the first thing the app will do is scrape as much as it possibly can from the device, lest it lose access later; at that point, auditing it and restricting its permissions is already too late.
Give an inch, and they’ll take a mile. Better to make them justify every millimetre instead.
We're not in 1980 anymore. Most people need zero apps with full access to the disk, and even power users need at most one or two.
In macOS, for example, the sandbox and the file dialog already allow opening any file, bundle, or folder on the disk. I haven't really come across any app that does better browsing than this dialog, but if one exists, it should be a special case. Funnily enough, WhatsApp on iOS is an app that reimplements the photo browser, as a dark pattern to force users to either give full permission to photos or suffer.
The only time the OS file dialog becomes limited is when a file is actually "multiple files", which is 1) solvable with bundles or folders and 2) a symptom of developers not giving a shit about usability.
I think you misunderstood. If the OS becomes the arbiter of what can and cannot be accessed, it's a slippery slope to the OS becoming a walled garden where only approved apps and developers are allowed to operate. Of course that is a pretty large generalization, but we already see it with mobile devices and are starting to see it with Windows and macOS.
I don't think we should be handing more power to OS makers and away from users. There has to be a middle ground between walled gardens and open systems. It would be much better for node & npm to come up with a solution than to lock down access.
> Meanwhile, locking down those apps so that only apps with a certain signature can read from your WhatsApp means that if you want to back up your messages or read them for any legitimate purpose, you're now SOL, or reliant on a usually slow, non-automatable, UI-only flow.
...and this gives them more control, so they can profit from it. Corporate greed knows no bounds.
> I'm glad that modern computers are more secure than they have been
I'm not. Back when malware was more prevalent among the lower class, there was also far more freedom and interoperability.
The virus-infested computers caused by scam versions of Neopets are not dissimilar to Windows today.
Live internet popups you didn't ask for, live tracking of everything you do, new buttons suddenly appearing in every toolbar. All of it slowing down your machine.
It seems to me the only adequate solution to any of these security-and-privacy vs. data-sharing-and-access matters is going to be an OS- and system-level agent that can identify and question behaviors and data flows (an AI firewall with packet inspection?), and configure systems in line with the user's accepted level of risk and privacy.
It is already a major security and privacy risk for users to rely on the beneficence and competence of developers (let alone corporations and their constant shady practices/rug-pulls), as all the recent malware and large scale supply chain compromises have shown. I find the only acceptable solution would be to use AI to help users (and devs, for that matter) navigate and manage the exponential complexity of privacy and security.
For a practical example, imagine your iOS AI agent notifying you that, as you had requested, it adjusted the Facebook data-sharing settings because the SOBs changed them to be more permissive again after the last update. It might even suggest that, since this is the 5685th shady incident by Facebook, it may be time to adjust your position on what to share on Facebook at all.
That could also extend to the subject story, where one's agent blocks and warns about a library an app uses that is exfiltrating WhatsApp messages/data and sending it off the device.
Ideally such malicious code will also be identified much sooner as AI agents become code reviewers, QA, and even maintainers of open source packages/libraries, intercepting such behaviors well before they are made available; but ultimately, I believe it should all become a function of the user's agent looking out for their best interests at the individual level. We simply cannot sustain "trust me, bro" security and privacy anymore… especially since, as has been demonstrated quite clearly, you cannot trust anyone in the West anymore, whether due to deliberate or accidental actions, because the social compact has totally broken down… you're on your own… just you and your army of AI agents in the matrix.
That's the funny thing about those of us here in the spirit of Hacker News: we want to build – to hack.
It's all well and good for us all to use Linux to side-step this, but sometimes (shock, horror), we even want to _share_ those hacks with other people!
As such, it's kinda nice if the Big Tech software on those devices didn't lock all of our friends in tiny padded cells 'for their own safety'.
I don't really know what I'm doing, but: why couldn't messages be stored encrypted on a blockchain, with a system where both users in a one-on-one conversation agree on a key, or have their own keys, that grants permission to 'their' messages? Then you'd never be locked into a private software / private database / private protocol. You could read your messages at any point with your key.
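Setting the blockchain-storage question aside, the "both users agree on a key" part is essentially a Diffie-Hellman key exchange plus symmetric encryption. Here's a minimal sketch using the `cryptography` package; in reality each side would only hold its own private key and the peer's public key, and the conversation label is made up - generating both key pairs locally just keeps the demo self-contained:

```python
# pip install cryptography
import base64
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each user has a long-term key pair (normally generated on their own device).
alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Both sides derive the same shared secret from their own private key and the
# other side's public key (X25519 Diffie-Hellman).
shared_alice = alice_priv.exchange(bob_priv.public_key())
shared_bob = bob_priv.exchange(alice_priv.public_key())
assert shared_alice == shared_bob

# Stretch the raw shared secret into a symmetric key for the conversation.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"1:1 conversation").derive(shared_alice)
fernet = Fernet(base64.urlsafe_b64encode(key))

# The ciphertext can now live in any untrusted store (database, blockchain,
# whatever); only the two key holders can decrypt it.
ciphertext = fernet.encrypt(b"hello from alice")
print(fernet.decrypt(ciphertext))  # b'hello from alice'
```

The hard parts a real messenger has to solve are the ones glossed over here: authenticating the public keys, rotating keys (forward secrecy), and group chats - which is more or less what the Signal protocol underneath WhatsApp already does; the lock-in is in the storage and API layer, not the crypto.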
A huge fraction of the knee-jerk reactions here seem to miss the key point that the post is trying to get across:
> In the mid-2010s, during Furman’s tenure running economic policy under Obama, the company sold its defense business, offshored production, and slashed research, a result of pressure from financiers on Wall Street.
> Mesdag engaged in a proxy fight to wrest control of the company from its engineering founders, accusing one of its founders and iRobot Chairman Colin Angle of engaging in “egregious and abusive use of shareholder capital” for investing in research.
Yes Roomba sucks at this point. We get it. Thing is, if you slash research... that's what eventually becomes of your product.
This is what's wrong with investing overall: 1Q future blindness.
We'd have almost nothing if it weren't for university partnerships and corporate R&D way back when. There's no way to accomplish this now except to stay private.
Well, they took most of that money, and then just bought back their own stock. It's something more than just 1Q blindness and failure to understand the importance of research.
A company that does cutting-edge R&D for defense contracts and consumer small appliances is destined for trouble. They are two very different lines of business. While you might make an argument about synergy, the problem stems from the investors who are investing in two very different lines of business; ultimately, one of them was going to win. The failure to realize that offshoring would turn suppliers into competitors is a known issue in the consumer small appliance world, and it looks like they were not ready.
Interestingly enough, the R&D portion that was sold off became Endeavor Robotics, which was sold to Teledyne FLIR Systems and seems to be doing fine.
Their research wasn't on vacuum cleaners. It was building robots for the military and space. That's exactly what investors were complaining about -- the research wasn't leading to better vacuum cleaners. It was a distraction and not what investors wanted their money being used for.
It's crazy that the Dodge brothers destroyed the company/shareholder relationship for every contemporary and future US-based corporation and then died.