The maker movement is not dead, but it's a far more niche audience now. Don't get me wrong: get a 3D printer and an Arduino (or Arduino-like equivalent), endure a week of suffering, and you are hooked for life. That was my own experience and that of everyone I know who has ever gone down that road. ~~Vibe~~ Slop coding won't die either, but a lot of people will get a cold shower sooner or later; some already have. All AI slop is Russian roulette where the players may not even know they are playing and the gun is a backwards revolver. I can't say whether slop coding will professionally die before or after the AI bubble bursts, but everyone is starting to realize that slop is unmaintainable, inefficient, and full of bugs once you factor in all the edge cases no slop machine will ever cover. AI can exist in non-professional spaces and hobby projects, though I'd argue it may be equally dangerous for the people who use it and those around them: you are only one firewall-cmd away from leaking all your personal data.
As for the parallels with the maker movement, here's one example: drones are one of my hobbies. I love drones and I've built countless FPV ones. For anyone who hasn't done that, the main thing to know is that no two self-built drones are the same: custom 3D-printed parts, tweaks, tons of fiddling about. The difference is that while I am self-taught when it comes to drones, I have some decent knowledge of physics, and I understand the implications of building a drone and what could go wrong: you won't see me flying any of my drones in the city, though you may find me in some remote, secluded area. The point is that I take precautions so that when I eventually crash my drone (not IF but WHEN), it will be in a tree 10km from anything that breathes. Slop code is something you live with, and there are infinite ways to f-up. And way too many people are living in denial.
I've received several similar ones over the years. At this point, if I get an email from someone I don't know and it contains a link, chances are it's spam. I genuinely doubt GitHub (or any other company, for that matter) would do something about it. While I fully support the GDPR, the truth is that few people are willing to take action knowing how much bureaucracy would be involved...
For some obvious (and other not-so-obvious) reasons, I switched to Graphene a few weeks ago. For years I've been pushing towards de-cloudifying my digital life, for several reasons. On one hand there was subscription content, which gave me zero guarantees that what I'm interested in would still be available the next morning, even though I'd paid for it. On the other there was, you guessed it, the idiotic LLMs everywhere, and subsequently the complete annihilation of security practices by giving a probabilistic model unrestricted access to all of your data.
First things first, kudos to the GrapheneOS team for making it this easy to install and for the surprisingly rapid support for new devices. Sure, there are features I liked in the stock Android that came with Pixel phones (swipe typing is something I very much enjoyed), but all in all, I can't say I miss much otherwise. I've slimmed down my list of apps to basic functionality backed by self-hosted services (Nextcloud, Immich, Jellyfin, etc., along with a VPN I maintain myself).
I want to point out that for a very long time I worked for a company that developed games for mobile devices. The data we collected was mostly anonymous (unless you logged in with Facebook, in which case, by implication, we had your Facebook ID), and it was never even utilized all that much beyond bad attempts at maximizing sales (not effective ones anyway, because the people in charge were as incompetent as they could get). Still, I can say that we collected ungodly amounts of data: most of the cloud bills were storage for that specific reason. While we did not have bad intentions and had to operate under strict GDPR regulations, this was a large company that was constantly monitored. Small companies can fly under the radar and get away with not abiding by the rules and laws; often they are not even aware of what the repercussions could be. Similarly, the US and Asia-based giants can simply shrug it off and toss a few billion at fines. Make no mistake, no company is looking out for your best interest, and with that in mind, I couldn't recommend GrapheneOS (and self-hosting everything) enough, assuming you know what you are doing.
You can use a different keyboard than the default AOSP keyboard, with more modern features including but not limited to swipe typing. We plan to replace the AOSP keyboard with a fork of a more modern app, but there isn't yet one that meets the functionality requirements and is under a license we can use. FlorisBoard is what we plan to eventually use, although it might not be what we end up using.
We all saw that coming. For quite some time they have been anything but transparent or open: vigorously removing even mild criticism of their decisions from GitHub with no further explanation, locking comments, etc. No one who's been following the development and has been somewhat reliant on MinIO is surprised. Personally, the moment I saw the "maintenance" mode, I rushed to switch to Garage. I have a few features ready that I need to pack into a PR, but I haven't had time to get to that. I should probably prioritize it.
Why should these guys bother with people who won't pay for their offering? The community is not skilled enough to contribute to this type of project. Honestly, most serious open source is industry-backed and solves very challenging distributed-systems problems. A run-of-the-mill web dev doesn't know these things, I'm sorry to say.
No one should be immune from criticism. If you build a well-established open source project WITH the help of thousands of volunteers around the world, only to lock it up and say "pay up", that's called extortion.
Honestly? That claim seems a bit (read: A LOT) exaggerated. I haven't had WhatsApp in a decade, and none of my friends (scattered all over Europe) or family uses it. Viber used to be a big deal and to an extent still is in some areas of Europe. Personally, I think I've talked almost everyone into migrating to Signal.
In the Netherlands Signal is getting traction. I talk to most people via Signal, about 85% of my messages are via Signal. Which includes my parents, and I didn't even put them on Signal.
As many others pointed out, the released files are nearly nothing compared to the full dataset. Personally, I've been fiddling a lot with OSINT and analytics over the publicly available Reddit data (a considerable amount of my spare time over the last year), and the one thing I can say is that LLMs are under-performing (huge understatement): they are borderline useless compared to traditional ML techniques. As far as LLMs go, the best performers are the open source uncensored models (the more uncensored and unhinged, the better), while the worst performers are the proprietary, paid models, especially over the last 2-3 months: they have been nerfed into oblivion, to the extent that a simple prompt like "who is eligible to vote in US presidential elections" is treated as a controversial question. So in the unlikely event that the full files are released, I personally would look at traditional NLP techniques long before investing any time into LLMs.
On the limited dataset: Completely agree - the public files are a fraction of what exists and I should have mentioned that it is not all files but all publicly available ones. But that's exactly why making even this subset searchable matters. The bar right now is people manually ctrl+F-ing through PDFs or relying on secondhand claims. This at least lets anyone verify what is public.
On LLMs vs traditional NLP: I hear you, and I've seen similar issues with LLM hallucination on structured data. That's why the architecture here is hybrid:
- Traditional exact regex/grep search for names, dates, identifiers
- Vector search for semantic queries
- LLM orchestration layer that must cite sources and can't generate answers without grounding
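To make the hybrid idea concrete, here is a minimal, self-contained sketch of that kind of pipeline. Everything in it is illustrative: the corpus is a toy dictionary, the "vector search" is a bag-of-words cosine stand-in for a real embedding index, and the final gate simply refuses to answer when retrieval returns nothing (the actual project's code and data are not shown here).

```python
import math
import re
from collections import Counter

# Toy corpus standing in for the indexed documents (hypothetical content).
DOCS = {
    "doc1": "Flight log 2002-07-14 departure from Teterboro",
    "doc2": "Deposition transcript mentioning a dinner in Palm Beach",
    "doc3": "Contact book entry for a masseuse in New York",
}

def exact_search(pattern: str) -> list[str]:
    """Regex/grep-style exact match over the corpus (names, dates, IDs)."""
    rx = re.compile(pattern, re.IGNORECASE)
    return [doc_id for doc_id, text in DOCS.items() if rx.search(text)]

def _bow(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def semantic_search(query: str, k: int = 2) -> list[str]:
    """Bag-of-words cosine similarity: a cheap stand-in for real
    embedding/vector search, same interface, same role in the pipeline."""
    q = _bow(query)
    def cos(d: Counter) -> float:
        dot = sum(q[t] * d[t] for t in q)
        norm = (math.sqrt(sum(v * v for v in q.values()))
                * math.sqrt(sum(v * v for v in d.values())))
        return dot / norm if norm else 0.0
    ranked = sorted(DOCS, key=lambda i: cos(_bow(DOCS[i])), reverse=True)
    return [i for i in ranked[:k] if cos(_bow(DOCS[i])) > 0]

def answer(query: str) -> dict:
    """Orchestration gate: no retrieved sources, no answer. The (omitted)
    LLM call would only ever be shown these retrieved documents."""
    hits = exact_search(re.escape(query)) or semantic_search(query)
    if not hits:
        return {"answer": None, "sources": [], "note": "no grounding found"}
    return {"answer": f"see {len(hits)} matching document(s)", "sources": hits}
```

The design point is the gate in `answer`: the model is never invoked without retrieval results, which is what the "must cite sources" claim amounts to in practice.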
"can't" seems like quite a strong claim. Would you care to elaborate?
I can see how one might use a JSON schema that enforces source references in the output, but there is no technique I'm aware of to constrain a model to only come up with data based on the grounding docs, vs. making up a response based on pretrained data (or hallucinating one) and still listing the provided RAG results as attached reference.
It feels like your "can't" would be tantamount to having single-handedly solved the problem of hallucinations, which if you did, would be a billion-dollar-plus unlock for you, so I'm unsure you should show that level of certainty.
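To illustrate the gap being pointed out: a JSON schema can force the model to emit a `citations` field, but only a post-hoc check can tell you whether the cited text actually appears in the retrieved documents. Here is a hypothetical sketch of such a verifier; the document content and answer shapes are invented for the example, and note this only catches fabricated quotes, not a faithful quote wrapped in a wrong interpretation.

```python
# Hypothetical retrieved context for one query.
RETRIEVED = {
    "doc7": "The meeting took place on 12 March and lasted two hours.",
}

def is_grounded(answer: dict) -> bool:
    """Accept the answer only if every quoted span occurs verbatim in the
    document it claims to come from, and at least one citation exists."""
    for cite in answer.get("citations", []):
        source = RETRIEVED.get(cite.get("doc", ""), "")
        if cite.get("quote", "") not in source:
            return False
    return bool(answer.get("citations"))

good = {"text": "It was on 12 March.",
        "citations": [{"doc": "doc7", "quote": "12 March"}]}
bad = {"text": "It was on 15 April.",  # hallucinated, yet schema-valid
       "citations": [{"doc": "doc7", "quote": "15 April"}]}
```

Both `good` and `bad` would pass schema validation; only the verbatim check rejects the hallucinated one, which is why "can't generate answers without grounding" is better read as "answers are rejected when grounding fails" than as a property of the model itself.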
It's true. We have basically moved off the platforms for agentic security and host our own models now... OpenAI was still the fastest, cheapest, working platform for it up until the middle of last year. Hey OpenAI, thank us later for blasting your platform with threat-actor data and behavior for several years! :P
I understand "uncensored" in the context of LLMs, but what is "unhinged"? Fine-tuning specifically to increase the likelihood of entering controversial topics without specific prompting?
Saying Grok is uncensored is like saying that DeepSeek is uncensored. If anything, DeepSeek is probably less censored than Grok. The Dolphin family has given me the best results, though mostly in niche cases.
This particular one: I suspect OpenAI uses different models in different regions, so I do get an answer, but I also want to point out that I am not paying a cent, so I can only test the free ones. For the first time ever, I can honestly say I'm glad I don't live in the US, but a friend who does sent me a few of his latest encounters, and that particular question yielded something along the lines of "I am not allowed to discuss such controversial topics, bla, bla, bla, you can easily look it up online". If that is the case, I suspect people will soon start flocking to VPN providers, and companies such as OpenAI will roll that behavior out worldwide. Time will tell, I guess.
I've had mixed feelings about HN in terms of how people perceive a product over the last few years. I used to have an overwhelmingly positive opinion of the community up until COVID, when everyone started building the "solution that will help researchers find a cure", which, in all instances, ended up being tons of people independently loading papers into Elasticsearch. When I pointed out that they were all solving a problem no one has, I got jumped by a ton of people going "nooooo, you just don't understand how powerful these systems are". In the end, none of those turned out to be a silver bullet, or a bullet for that matter.
Recently it's the AI craze. You have a complex problem to solve? "AI can easily do that." You have infrastructure issues? "AI can easily do that." You have issues processing petabytes of data fast and efficiently? "AI can easily solve that." I am getting a ton of bots trying to access my home network? "AI can easily solve that." I am having a hard time falling asleep? "AI." I have the flu? "AI."
In a nutshell, shiny-new-toy syndrome is very common, so the reception of a product is no guarantee of success. To give you an example: recently some people (pretty active on here) got in touch with me regarding an initiative I am a part of. They claimed they wanted some expertise on the subject, so I agreed to schedule a call. It turned out to be a sales pitch for yet another product that tries to solve a problem but doesn't, because the people who built it fundamentally do not understand the problem. Never mind that I'm not interested in being their client, given that it's a volunteer project and none of the people involved are paid (if anything, we are paying out of our own pockets to keep it alive); it was yet another techbro product that tries to build a skyscraper starting from the roof. Except the ground underneath is partially lava, partially a swamp.
I think it is all related to the impostor syndrome: young people have it, they get a bit older and gain confidence. By the time people hit their early to mid 30s, they start realizing that most of the world operates on patches over patches and 2 layers down, no one has a clue what is going on.
Everything that is currently going on is a result of people who are convinced they know what they are doing. Spoilers: they don't. The sooner the AI bubble bursts, the better.
The amount of hate I've received here for similar statements is astonishing. What is even more astonishing is that it takes third-grade math skills to work out that current AI costs (even ignoring the fact that there is nothing intelligent about current AI) are astronomical, that it does not deliver on the promises, and that everyone is operating at wild losses. At the moment we are at "if you owe 100k to your bank, you have a problem, but if you owe 100M to your bank, your bank has a problem". It's the exact same bullshitter economy that people like musk have been exploiting for decades: promise a ton, never deliver, make a secondary promise for "next year", rinse and repeat -> infinite profit. Especially when you rope in fanatical followers.
I don't want to defend musk in any way, but I think you are making a mistake using him as an example, because what boosted him quite a lot is that he actually delivered what he claimed. Always late, but still earlier than anybody was guesstimating. And now he is completely spiraling, but it's a lot harder to lose a billion than to gain one, so he persists and even gets richer. Plus, his "fanatical" followers are poor. It just doesn't match the situation.
Sounds a lot like "I'm not racist but". There's a website dedicated to all of his bs https://elonmusk.today
He is the definition of a cult. He collects money from fanatical followers who will praise every word he says, never delivers: "oh, next year guys, for sure; wanna buy a Not-a-Flamethrower while you are at it?". Not to mention that what were once laughable conspiracy theories about him turned out to be true (ones even I laughed at when I first heard them). Torvalds is right in his statement about musk: "incompetent" and "too stupid to work at a tech company".
How is any of that different from AI evangelists, be they regular hype kids or CEOs? "All code will be written by AI by the end of {current_year+1}." "We know how to build AGI by the end of {current_year+1}." "AI will discover new sciences." A quick search will turn up a billion such claims from everyone involved. Much like on here, where I'm constantly told that LLMs are a silver bullet and the only reason they aren't working for me is that my prompts are not explicit enough or I'm not paying for a subscription. All while watching people submit garbage LLM code and break their computers by copy-pasting idiotic suggestions from chatgpt into their terminals. It is depressing how close we are to the Idiocracy world without anyone noticing. And it did not take 500 years but just 3. Everyone involved: altman, zuckerberg, musk, pichai, nadella, huang, etc., is well aware they are building a bullshit economy on top of bullshit claims and false promises.
Fusion 360: the price I'm paying for what are effectively hobby projects (ones I occasionally publish for free if I feel someone would benefit from them) is absurd. No other CAD software comes even close to being as easy and as flexible, though, so I'm accepting it for the time being.
Can you not use the "free for hobbyists" license? Autodesk make it unreasonably hard to renew it, instead dark-patterning you into upgrading to the paid tier. (Unless of course you need paid tier features)
I agree on the easy to use front though. I'm trying to move to freecad but it hasn't had its blender moment yet.
That's what I used for a long time, but it was way too restrictive: I could only have 10 saved designs at a time with their entire timelines, in case I wanted to go back or modify/improve anything. And converting an STL back into a solid doesn't give the best results and adds a lot of overhead. So until something better pops up, I have no choice but to pay for it.