When someone is able to put something like this together on their own it leaves me feeling infuriated that we can’t have nice things on consumer hardware.
At a minimum, Siri, Alexa, and Google Home should have a path to plug in a tool like this. Instead I’m hacking together conversation loops in iOS Shortcuts to approximate this style of interaction with significantly worse UX.
I feel like you could get pretty far with a raspberry pi and microphone/speaker. I think the hard part is running a model that can detect a "Hey agent" on-device, so that it can run 24/7 and hand off to the orchestrator when it catches a real question/query.
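A minimal sketch of that hand-off pattern, assuming a hypothetical `detect_wake_word` stub and a made-up `ORCHESTRATOR_URL` endpoint (a real build would swap in an actual on-device model such as openWakeWord or Porcupine for the stub):

```python
# Sketch of an always-on wake-word loop that hands real queries off to a
# remote orchestrator. detect_wake_word and ORCHESTRATOR_URL are
# hypothetical stand-ins, not a real API.

import json
import urllib.request

ORCHESTRATOR_URL = "http://localhost:8000/query"  # hypothetical endpoint


def detect_wake_word(frame: bytes) -> bool:
    """Stub: replace with an on-device model (e.g. openWakeWord)."""
    return b"hey agent" in frame  # placeholder logic for illustration


def hand_off(query: str) -> str:
    """Send the captured query to the orchestrator and return its reply."""
    req = urllib.request.Request(
        ORCHESTRATOR_URL,
        data=json.dumps({"text": query}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["reply"]


def listen(frames):
    """Scan a stream of audio frames; yield the query after each wake word."""
    armed = False
    for frame in frames:
        if not armed and detect_wake_word(frame):
            armed = True  # wake word heard: treat the next frame as the query
        elif armed:
            yield frame.decode(errors="ignore")
            armed = False
```

The point is that only the cheap `detect_wake_word` check runs 24/7 on the Pi; nothing leaves the device until the wake word actually fires.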
I think you’re right. I’ve been seeing more and more DIY hardware setups popping up. There are even wake word models for hardware as low-powered as the ESP32.
I’m in the middle of moving, though, so I’ll probably have to wait before taking on a hardware project.
It sounds like this attack would work in that scenario provided the attacker is able to connect to the guest access point.
I haven’t paid attention to one in a while, but I seem to remember needing to authenticate with Xfinity credentials to use the guest network. That at least makes attribution possible.
It looks like both clients must be on the same VLAN for the attack to work. They could be connected on different BSSIDs or even different SSIDs, but they still must be on the same VLAN.
I haven't paid a lot of attention to this issue, but after reading some of the statements in the article I can't help but agree with Tech Oversight's conclusions. It's just anecdotal, but recently, when mindlessly scrolling reels (yes, bad enough already), I came across a reel that was unquestionably sexually explicit (in the USA; I think policy varies by locale). I reported the account and the reel because, after clicking through to the account, there was even more material. This wasn't just a "creator" promoting their adult site with suggestive content. The account had several reels where the preview image was just black, but after 2-3 seconds an adult image would appear.
Facebook closed my report with "no further action required" saying the content does not violate their policy. I'm sure they have an absolute tsunami of reports to go through and I do not envy the humans tasked with this work. However, it seems pretty clear to me they are not effectively achieving their publicly stated goals of moderating the content on their platform.
I know exactly what you’re talking about and it has been driving me nuts for years. They constantly go up there and say “it’s cool we totally have these amazing algorithms that solve the problem,” then when they don’t solve the problem they just shrug and go “well we’re just so big you can’t actually expect us to do what we said we would. We’re doing decent enough!” YouTube is another great example of this.
Fine, be smaller. If I own 10,000 apartment buildings and one of them collapses killing dozens and injuring more, I don’t just get to shrug and go “sorry folks, it’s not reasonable for you to expect me to follow all the rules on all my properties. I’m too big.”
"Oh, we get so much content that we can't possibly review it all" — then don't accept any more content from anyone?
Honestly, the fact that these companies are too big is a big big concern. We should have limited their size long ago and never accepted that bullshit excuse.
I would say they do have these algorithms. They just know they can get away with this because literally nobody is forcing them not to. They buy politicians in the US, and EU fines seem too small to matter to them, even sparking outrage among US politicians when applied.
I sure hope they end up like Standard Oil: broken up into smaller companies, because this monopoly is an absolute net negative for society.
The additional problem is that they guided the industry and their own platforms to actively generate that much content. No effort was made to naturally or organically slow creation, or even to perform any sort of de-duplication. So whatever argument they use along the lines of "we're too big, there is just too much content" is directly on them.
Social media is essentially a slot machine, and to build it they had to mobilize and incentivize entire industries to revolve around generating millennia's worth of content.
It's certainly a stretch to describe that as killing people. You can argue it's an acceptable stretch, or that it's still very bad even if described more accurately, but it is plainly not what 'killing people' traditionally means.
Yes, it is. Anything with a wide userbase that worsens or even just intensifies mood will lead to elevated suicide rates. If your boss picks someone else for the promotion and you kill yourself over it, your boss didn't kill you. If you're attracted to someone and they marry someone else and you respond similarly, same answer. If your instagram friends post pictures of their happy lives and it makes you feel bad, etc.
You can broaden the definition of 'killing people' to include 'elevating their risk of killing themselves', but then you have to shed the intuitions that are the sole purpose of using that kind of language in the first place. It's a rhetorical sleight of hand.
We have decades of research now showing concretely the harmful effects of social media, especially for people under 18. It is not debatable. It directly harms broader society and individuals yet we continue to have incredibly thin regulations that are barely enforced.
Help me out here. This has been happening to me a lot lately and I have to assume it's a failure to communicate on my part. I only intended to dispute that this should be called 'killing people'. I tried to indicate that I still think it's bad even if not described in that way.
Is there something that I said or failed to say that led you to believe otherwise, or are you intending this reply as an argument that it should be called 'killing people' in a way I'm not understanding?
And here I thought the internet was about the free exchange of ideas and knowledge. And free will was the ability to choose to use social media or not.
I don’t know why you’re getting sarcastic with me. I also imagine you’re aware of the addiction element but if not I can send some studies/research along.
Social media companies have decades of work and billions of dollars of research to pull from. They use every single trick and tool they can to make it an addiction. The dialogue and shared strategies between them and the gambling industry is enough of a red flag on its own IMO.
It’s not a fair fight. Asking someone to just stop using social media can be like telling a gambling addict to just stop gambling. You’re also expecting teenagers to exhibit that self control.
And I’m not even getting into how critical it is to use social media if you run a business. Hell at some companies you’re required to participate in their social media presence. You can’t simply make it go away. You may as well tell people to just stop buying a phone or a computer.
I mean, I work in tech. I’m 39. I’ve been on social media my entire life. At one point I was addicted. I’ve cut social media down to 3 hours a week now, and I’ve taken years off in the past. I’ve also beaten alcohol addiction after the loss of my son, and I’ve quit gaming because I couldn’t balance it. Should we ban gaming? Should we ban marketing? Billboards? I see booze everywhere, and yet no one is saying people with alcohol addictions are harmed by those ads.
I understand the psychology of marketing and what these companies do to exploit that.
At the end of the day, if it’s about the kids, the parents should be educated at this point.
When I was a “tween” I was building CGI blogs and MySpacing before moving from Perl to PHP; in high school you needed a college email address to sign up for Facebook, so that didn’t come until after I graduated high school in 2004.
Sure things have changed but I find if I pause and reflect daily and stay in the moment I don’t ever doom scroll or need social media.
Even now this comment is only being written bc I’m taking a poo.
> I'm sure they have an absolute tsunami of reports to go through and I do not envy the humans tasked with this work.
I'm not sure this is an excuse any more, particularly for companies with huge AI investments.
Maybe you don't have AI making final decisions, but for egregious cases like what you describe, it should be well within Meta's capabilities to prioritize human enforcement for them using AI.
Yea, it's total dogshit. I'd even speculate that this is intentional to drive engagement. I certainly don't believe that one of the biggest corporations doesn't have the capability to recognize gore, when a free version of ChatGPT can do that without a problem.
It's been years since I was regularly active on Facebook, but I had many reports closed that way, and then days to weeks later the account would be gone anyway. I suspect they batch up account closures to obfuscate their systems, like online games do with cheaters.
When you pick apart what's actually going on in Meta's revenue pipeline it's hideous. Think about this and compare it to what the world was like say 30 years ago:
* There are literally thousands of IG profiles that are essentially softcore porn which serves as a lead gen device for an OnlyFans account. Meta promotes these profiles to its users heavily because sex sells. Meta profits from the engagement with the profile, OnlyFans profits from signups sent to it by Meta.
* This is one of the primary ways OnlyFans has grown its pornography business to $8B a year
* Once users sign up for OnlyFans, a common mode of engagement is that a management company lies and pretends to be the porn actress, texting with the user under fraudulent pretenses as the user consumes porn
Now... what was the world like 30 years ago?
* You couldn't buy porn mags without showing ID, Internet porn not really a thing for most people yet
* Even softcore stuff was mostly relegated to late night Cinemax
* Far fewer women had body image disorders and mental health disorders
* Far fewer young men had ED
This stuff is evil, when you connect the dots, it's crime, evil, lies and perversion all lined up to make a small number of companies a staggering amount of money. Somehow government and industry are OK with this, I guess this is the world the Epstein class built for us so no surprise. I am not a religious guy, and I would hardly call myself a prude, but this all exists and is widespread because it enables profit and fraud and exploitation, and I find that disgusting. Zuck's a porn baron. He knows what's going on. The fucker's on the take.
If anything should be in the dictionary next to the word evil, it's the 2026 state of affairs
Do you have a reference? The one (rather simple/incomplete) source I could find, at https://worldpopulationreview.com/country-rankings/erectile-... , shows that overall ED dropped. Maybe it is different for young men, but I would be curious to see an actual study.
If there was any increase in reported incidents of ED over the last 30 years, I would hazard a guess that it has to do with the various medications released over that period to address it. Fewer people will report an embarrassing issue when there is only a narrow chance it can even be fixed.
I’m here before some pedantic person replies “correlation without causation.”
People repeat that phrase constantly, forgetting that the lack of proof of causation is not proof of no causation. It means the question could go either way, not that the claim has been debunked.
Oh sweetie, Meta's revenue pipeline has included knowingly playing a crucial supporting and fomenting role in a genocide in Myanmar, and continues to rely on a huge number of actual scam ads from China that are intentionally ignored to protect revenue. Besides of course the "developing algorithms that detect when teen girls are at their most vulnerable to manipulate them".
But you're right. Ellison and Thiel get all the attention, while Zuckerberg has caused orders of magnitude more societal destruction than both combined. Not because the former two are better people, far from it; it's just the hard real-world impact of the companies they've founded.
In tech, nothing comes close to the damage of Meta. Not even the most despicable of companies like ClearView, as while their products might be worse on paper their actual impact pales in comparison.
This is really cool! Even before OpenClaw gained popularity, I've been exploring ways to create an iOS Shortcut so I could connect my Apple HomePod to a local agent framework. iOS Shortcuts + Apple HomePod just make it so difficult to handle multi-turn voice interactions, especially if you want to use better TTS and STT models than what's provided by Apple.
Most of my research was focused around the Home Assistant Wyoming project and their voice assistant but I hadn't pulled the trigger yet on buying their hardware as it's still very early stages. I hadn't heard of the PamirAI device yet. Thanks for bringing it to my attention!
How would you say it compares to something like Alexa, HomePod, or Google Home when it comes to wake word detection and overall audio quality (strictly for voice, not music)?
How well does the more "async" interaction work for this? One of the major limitations of iOS Shortcuts is that they time out after 30 seconds, so if your agent doesn't respond within 30 seconds the Shortcut closes and the response is never uttered. Does this stack just utter a response out loud even if it doesn't arrive for 3-4 minutes? Or does it indicate a response is queued?
Is anyone shocked that the founder of Trilogy Software, Joe Liemandt, would take the lessons learned in creating a "bossware" enterprise software stack (Crossover) and apply them to his latest venture?
I think there is an enormous opportunity in combining Ed Tech and Generative AI, especially if you can create a highly tailored tutor available 24/7 for every student, particularly those in low-income situations who have historically been locked out from such guidance. It's just unfortunate that this so rapidly morphed into spying on students for data harvesting.
The follow-on effects, if this becomes pervasive, could be incredibly damaging for students. Anxiety from performance metrics is already a very real thing because of standardized testing and scoring to get into the best schools. Also, imagine all of this data "following" a student as they transition into the workforce. We're headed towards a future where entry-level employees will have to disclose their "course work engagement" KPIs on their resumes.
I think it does matter how much power it takes, but in the context of a power-to-"benefits humanity" ratio. Things that significantly reduce human suffering or improve human life are probably worth spending energy on.
However, if we frame the question this way, I would imagine there are many more low-hanging fruit before we question the utility of LLMs. For example, should some humans be dumping 5-10 kWh/day into things like hot tubs or pools? That's just the most absurd one I was able to come up with off the top of my head. I'm sure we could find many others.
It's a tough thought experiment to carry through, though. Ultimately, one could argue we shouldn't spend any more energy than is absolutely necessary to live (food, minimal shelter, water, etc.). Personally, I would not find that an enjoyable way to live.
I find it difficult to believe software is Airbus’ competitive edge. First, their software for aircrew bidding is an absolute and utter disaster. Date filtering has been broken nearly a year despite multiple releases being pushed. Date management is like THE KEY functionality of aircrew bidding. I also use their flight plan software and it’s like they never bothered to ask a pilot how they use a flight plan in flight.
I think Airbus is riding the coattails of solid engineering done in the 80s, continuing to iterate on that platform, versus Boeing trying to iterate on a hardware platform from the 60s. Airbus benefited significantly from 20 years of engineering and technological progress. Since the original design of the A320, changes have been incremental: slightly different engines, the addition of GPS/GNSS and CPDLC, CRT to LCD screens. Meanwhile, Boeing has attempted to take a steam-gauge design from the 60s and retrofit decades of technology improvements onto it, and, critically, they attempted to add engines that significantly altered the aerodynamics of the aircraft.
> Google DeepMind and GTIG have identified an increase in model extraction attempts or "distillation attacks," a method of intellectual property theft that violates Google's terms of service.
That’s rich considering the source of training data for these models.
Maybe that’s the outcome of the IP theft lawsuits currently in play. If you trained on stolen data, then anyone can distill your model.