If you have to take the call, and your main concern is desktop client malware...
At a startup a few years ago, since I was the engineering dept., I had to be on a lot of enterprise sales/partnership calls, and much of the time we had to use the other company's favorite videoconferencing software.
Rather than installing those dumpster fire desktop apps on my Linux laptop that had the keys to our kingdom, I expensed an iPad that would be dedicated to random videoconf apps.
We still got violated in numerous ways, but at least it was compartmentalized away from the engineering laptop.
(I also used the iPad for passive monitoring of production at night, like a little digital photo frame in my living room, after putting away the work laptop.)
You can still. There's a small dark pattern to discourage it, though. You go to the URL for the call, click the button to launch the app, and when that fails, you see a small link to do the call in the Web browser.
Every once in a while, someone will ask me to screen-share on a shared monitor, and then I have to explain that I can't, because I'm on Zoom in the browser.
It's always great to see the reactions that gathers. It's a true rainbow: bemusement, curiosity, exasperation, outright suspicion... and everything in between!
I had to do it once and it's extremely difficult. I don't remember the details, but I think you have to do dozens of extra steps in your account configuration, and it won't work on your phone unless you request the desktop version of the website.
Click on the meeting link, which lands you on a download page. Then click the big blue download button in the center of the screen. When you click it, a link will appear in the second row below the blue button, something like "continue from browser". Click on that and you are golden.
I already have Zoom installed on the work computer but for some reason it has started doing this weird thing where every time I click a Zoom meeting link in Google Calendar, Google Chrome downloads a copy of the Zoom installer at the same time as it opens the already installed Zoom. I didn’t notice until I already had six recently downloaded copies of the installer in the Downloads folder.
No idea why this happens. But it’s probably part of the crappy pushiness of Zoom to get people to install their app that makes them trigger a download of the installer because either they are not detecting that Zoom is already installed at the right time, or they are so eager to download the installer that they don’t even care about whether or not you already have it installed.
I’ve disliked Zoom since the beginning for their antics, and the only reason I have it installed is because I have to for the meetings at work, and the work computer belongs to the company I work for anyway, not to me.
Speaking of prolific Racketeers... Noel! Just an hour ago, on a walk, I was thinking, "I should work through that one LLM book, and implement it in Racket." (But have started job-hunting, so will probably be Python.)
I've got so much other stuff I'd rather learn and code I'd rather write (C/wasm backend for my language), but I've also started job hunting and probably should understand how this latest fad works. Neural networks have long been on my todo list anyway.
I don't know how to appeal the deleted account, but regarding not triggering this check again...
Emphasize your own brand and model number, and make the other brands' names more clearly a description, in the Amazon item title?
FooCorp TagTeam S (sleeve mount holder to attach Apple AirTag to Samsung TV remote)
(Background on a simple filter: On eBay, it seemed like someone told counterfeit sellers that all they had to do was to copy&paste the string "For" in front of the brand name and model number, and then they could sell counterfeits. And sometimes black out the counterfeited brand name in the photos. So an item title might be of the format "For <brand> <model>", and mean it's definitely a counterfeit or knockoff of "<brand> <model>".)
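A filter like that is easy to express. Here's a hypothetical sketch in Python (the regex and the example titles are made up for illustration, not taken from any real listing):

```python
import re

# Titles of the form "For <Brand> <Model> ..." are a strong counterfeit
# signal: the seller is naming the brand being imitated rather than their own.
SUSPECT_TITLE = re.compile(r"^\s*for\s+\S+", re.IGNORECASE)

def looks_like_knockoff(title: str) -> bool:
    """Flag listing titles that start with 'For <brand> <model>'."""
    return bool(SUSPECT_TITLE.match(title))

print(looks_like_knockoff("For Apple AirPods Pro case"))  # True
print(looks_like_knockoff("FooCorp TagTeam S sleeve"))    # False
```

Obviously this over-flags legitimate accessory listings ("For Samsung TV remote..." can be an honest description), so in practice it's a first-pass heuristic, not a verdict.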
Then I made some DSLs for doing some of the common scraping coding patterns more concisely and declaratively, but the DSLs ended up in a Lisp-y syntax, not looking like XPath.
> the easiest way to break the chicken-and-egg problem of network effects is to simply cheat and use bots to make the platform look popular.
In relatively early days of Reddit, before mainstream awareness, I thought it suspicious how clever or knowledgeable so many of the comments were. Better than any other general-purpose venue I could think of.
So, when telling people about Reddit, I'd sometimes remark that I suspected they'd enlisted a bunch of writer shills, to frontload and elevate their comments traffic.
Maybe it was all genuine and organic, and an artifact of the voting system and network effects, while the bar for quality was set so low by some other venues.
Though, years after Reddit was mainstream, I heard something about the founders originally writing a lot of the comments themselves.
Reddit is an interesting case, but at least to me it felt genuine in the early years. Even today I generally trust Reddit comments, but it's important to check the context and the commenter before proceeding.
I feel like even though Reddit has undergone various management changes, technology changes, and site UI/UX changes, the core demographic is still there, and I hope they don't fuck that up. Once old.reddit.com is gone I'll know the shark has truly jumped. Or maybe someone intelligent will take the reins and understand that domain is not to be fucked with.
IIRC Reddit used to have an option that only admins could see that would allow them to write comments under other accounts without going through the trouble of registering them/logging into them/etc.
The internet itself went through a similar growth pattern without astroturf. The original users were all researchers, which served as a strong implicit filter, and then the new users were students who had to be taught Netiquette every September, and eventually the floodgates opened to the public and the academics lost the ability to steer the culture in what was called The Eternal September (1993).
The same "initial implicit filter followed by gradual but inevitable reversion to the mean" dynamic explains your observations of early reddit without implying fraud, although it certainly doesn't imply the absence of fraud either. That said, "fraud" is probably a strong word for reddit astroturf in this present day and age where we have a (comparatively) planet-sized Dead Internet built on geological quantities of ads and slop.
Many people seek being outraged. Many people seek to have awareness of truth. Many people seek getting help for problems. These are not mutually exclusive.
Just because someone fakes an incident of racism doesn't mean racism isn't still commonplace.
In various forms, with various levels of harm, and with various levels of evidence available.
(Example of low evidence: a paper trail isn't left when a black person doesn't get a job for "culture fit" gut feel reasons.)
Also, faked evidence can be done for a variety of reasons, including by someone who intends for the faking to be discovered, with the goal of discrediting the position that the fake initially seemed to support.
Is a video documenting racist behavior a racist or an anti-racist video? Is a faked video documenting racist behavior (that never happened) a racist or an anti-racist video? Is the act of faking a video documenting racist behavior (that never happened) a racist act or an anti-racist act?
A video showing racist behavior is racist and anti-racist at the same time. A racist will be happy watching it, and an anti-racist will forward it to further their anti-racist message.
Faking a racist video that never happened is, first of all, faking. Second, it's the same: racist and anti-racist at the same time. Third, it's falsifying the prevalence of occurrence.
If you add to the video a disclaimer, "this video has been AI-generated, but it shows events that happen all across the US daily," then there's no problem. Nobody is being lied to about anything. The video carries the message; it's not faking anything. But when you pass off a fake video as a real occurrence, you're lying, and it's as simple as that.
Can a lie be told in good faith? I'm afraid that not even philosophy can answer that question. But it's really telling that leftists are sure about the answer!
That's not necessarily just a leftist thing. Plenty of politicians are perfectly fine with saying things they know are lies for what they believe are good reasons. We see it daily with the current US administration.
Well yes, that's what he wrote, but that's like saying: stealing can be done for a variety of reasons, including by someone who intends the theft to be discovered? Killing can be done for a variety of reasons, including by someone who intends the killing to be discovered?
I read it as "producing racist videos can sometimes be used in good faith"?
They're saying one example of a reason someone could fake a video is so it would get found out and discredit the position it showed. I read it as them saying that producing the fake video of a cop being racist could have been done to discredit the idea of cops being racist.
There are significant differences between how the information world and the physical world operate.
Creating all kinds of meta-levels of falsity is a real thing, with multiple lines of objective (if nefarious) motivation, in the information arena.
But even physical crimes can have meta information purposes. Putin for instance is fond of instigating crimes in a way that his fingerprints will inevitably be found, because that is an effective form of intimidation and power projection.
I think they’re just saying we should interpret this video in a way that’s consistent with known historical facts. On one hand, it’s not depicting events that are strictly untrue, so we shouldn’t discredit it. On the other hand, since the video itself is literally fake, when we discredit it we shouldn’t accidentally also discredit the events it’s depicting.
So make fake videos of events that never actually happened, because real events surely did that weren’t recorded? Or weren’t viral enough? Or something?
How about this question: Can generating an anti-racist video be justified as a good thing?
I think many here would say "yes!" to this question, so can saying "no" be justified by an anti-racist?
Generally I prefer questions that do not lead to thoughts being terminated. Seek to keep a discussion not stop it.
On the subject of this thread, these questions are quite old and are related to propaganda: is it okay to use propaganda if we are the Good Guys, even if, by doing so, we make our own people more susceptible to propaganda from the Bad Guys? Every single one of our nations and governments has decided yes, it's good to use propaganda.
Because that's explicitly what happened during the rise of Nazi Germany; the USA had an official national programme of propaganda awareness and manipulation resistance which had to be shut down because the country needed to use propaganda on their own citizens and the enemy during WW2.
So back to the first question: it's not the content (whether it's racist or not), it's the effect: would producing fake content reach a desired policy goal?
Philosophically it's truth vs lie, can we lie to do good? Theologically in the majority of religions, this has been answered: lying can never do good.
Game theory tells us that we should lie if someone else is lying, for some time. Then we should try trusting again. But we should generally tell the truth at the beginning; we sometimes lose to those who lie all the time, but we can gain more than the eternal liar if we encounter someone who behaves just like us. Assuming our strategy is in the majority, this works.
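That strategy is roughly tit-for-tat in the iterated prisoner's dilemma, and it's small enough to sketch. The payoff numbers below are the standard textbook values, chosen just for illustration:

```python
# Iterated prisoner's dilemma. T = tell the truth (cooperate),
# L = lie (defect). PAYOFF maps (my move, their move) -> my score.
PAYOFF = {
    ("T", "T"): 3, ("T", "L"): 0,
    ("L", "T"): 5, ("L", "L"): 1,
}

def tit_for_tat(opponent_history):
    """Start truthful, then mirror the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "T"

def always_lie(opponent_history):
    return "L"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)   # each player sees the other's history
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))  # (30, 30): mutual truth-telling pays best overall
print(play(tit_for_tat, always_lie))   # (9, 14): the liar wins this pairing, but only barely
```

This is the point of the comment above: tit-for-tat loses a little to an eternal liar in a single pairing, but across a population where truthful reciprocators are the majority, it accumulates far more than the liar does.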
But this is game theory, a dead and amoral mechanism that is mostly used by the animal kingdom. I'm sure humanity is better than that?
Propaganda is war, and each time we use war measures, we're getting closer to it.
Sunday evening musings regarding bot comments and HN...
I'm sure it's happening, but I don't know how much.
Surely some people are running bots on HN to establish sockpuppets for use later, and to manipulate sentiment now, just like on any other influential social media.
And some people are probably running bots on HN just for amusement, with no application in mind.
And some others, who were advised to have an HN presence, or who want to appear smarter, but are not great at words, are probably copy&pasting LLM output to HN comments, just like they'd cheat on their homework.
I've gotten a few replies that made me wonder whether it was an LLM.
Anyway, coincidentally, I currently have 31,205 HN karma, so I guess 31,337 Hacker News Points would be the perfect number at which to stop talking, before there's too many bots. I'll have to think of how to end on a high note.
(P.S., The more you upvote me, the sooner you get to stop hearing from me.)