Personally, I use https://github.com/kagisearch/smallweb/ to feed my search engine / crawler. It contains 30k+ RSS/Atom feeds of indie web sites. Thx for sharing this and the other directories.
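For anyone who wants to do the same, here's a minimal sketch of how seeding a crawler from that list can look (the "smallweb.txt" filename and the parsing are my assumptions, not the repo's official interface): read the feed list, fetch each feed, and hand the entry links to the crawler frontier.

    # Minimal sketch: seed a crawler frontier from a plain-text list of
    # feed URLs (one per line). "smallweb.txt" is assumed to be a local
    # copy of the repo's feed list.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEEDS_FILE = "smallweb.txt"

    def entry_links(feed_url: str):
        """Yield item/entry links from an RSS or Atom feed."""
        with urllib.request.urlopen(feed_url, timeout=10) as resp:
            root = ET.fromstring(resp.read())
        for node in root.iter():
            # RSS: <item><link>url</link>; Atom: <entry><link href="url"/>
            if node.tag.endswith("item") or node.tag.endswith("entry"):
                for child in node:
                    if child.tag.endswith("link"):
                        link = child.get("href") or (child.text or "").strip()
                        if link:
                            yield link

    with open(FEEDS_FILE) as fh:
        for url in (line.strip() for line in fh if line.strip()):
            try:
                for link in entry_links(url):
                    print(link)  # feed these into the crawler queue
            except Exception as exc:
                print(f"skipping {url}: {exc}")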
If I understand this correctly, LinkedIn fingerprints your browser. And browsergate now shows how harmful this can be when combined with private data (like your job, full name, and ID) being sold to third parties.
Companies are in it to make money, and if something is free, you're the product.
If you want to protect yourself: the EFF's Privacy Badger browser add-on [1] tries to block fingerprinting.
Also, browser fingerprinting is a common tracking pattern nowadays. You can test [2] your browser, and please start protecting yourself: e.g., use add-ons like uBlock Origin and Privacy Badger to block tracking, and/or use different browsers and devices for different use cases. DNS blocking with a blocklist like hagezi's [3] is IMO the best option, but also a bit more involved, since you host your own DNS forwarder(s). AdGuard Home [4], for example, helps you host your own DNS infrastructure. It's also possible to add blocklists to dnsmasq or unbound and run them on your notebook as forwarders.
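If you go the dnsmasq route, a minimal sketch of what I mean (the list URL is a placeholder; point it at the hosts-format file of whatever blocklist you use): convert a hosts-format blocklist into dnsmasq "address=" rules that sinkhole each domain.

    # Sketch: turn a hosts-format blocklist into a dnsmasq config.
    # LIST_URL is a placeholder -- substitute the hosts-format file of
    # the blocklist you actually use (e.g. one of the hagezi lists).
    import urllib.request

    LIST_URL = "https://example.com/hosts.txt"  # placeholder

    with urllib.request.urlopen(LIST_URL, timeout=30) as resp:
        lines = resp.read().decode("utf-8", errors="replace").splitlines()

    with open("blocklist.conf", "w") as out:
        for line in lines:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # hosts format: "0.0.0.0 tracker.example.com"
            parts = line.split()
            if len(parts) >= 2 and parts[1] not in ("localhost", "localhost.localdomain"):
                out.write(f"address=/{parts[1]}/0.0.0.0\n")
    # drop blocklist.conf into /etc/dnsmasq.d/ and restart dnsmasq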
As with humans, vague requirements, too large or too small a context, or addressing the wrong domain can cause issues.
My current workflow with Mistral or Claude for implementing features is to first write a playbook (like a developer guide) with them on how to implement features in the current repository/source/project.
For example, something like:
Implement the feature using a, b, c as a blueprint (architecture, tests, code, documentation, and other things like style guides, commands for checks or tests, linter, formatter, frameworks/versions, standards, etc.). On features: write the tests for xyz first, then implement x, run the test for x - it should now be green - and so on. "Implement a feature" means: write the tests, the code, the documentation - and so on. "Feature complete" means: tests are green, code is formatted and linted, documentation is available - and so on. "Good code" means ...
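To make the test-first step concrete, a toy sketch (the names like "slugify" are hypothetical, not from any real playbook): the test is written first and stays red until the feature code exists.

    # Toy illustration of the test-first step the playbook prescribes
    # (slugify is a hypothetical feature): write the failing test, then
    # the code, then run it until green.
    import unittest

    def slugify(title: str) -> str:
        """Feature x: turn a title into a URL slug."""
        return "-".join(title.lower().split())

    class TestSlugify(unittest.TestCase):
        # Written first; red until slugify() above is implemented.
        def test_spaces_become_dashes(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

    if __name__ == "__main__":
        unittest.main()  # green means: move on to the next feature step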
The playbook approach works, even if the chat context becomes too large after some time. If I notice that, I have the model reread the playbook.
The playbook is also a living document; usually, I ask the LLM at the end if it wants to add any changes or additions.
But the playbook itself might become an issue if it gets too long. 200-500 lines works best for me atm.
- Browser and Mail-Client: Firefox and Thunderbird (since ~2013, the last Opera release)
Costs: ~60 EUR/month and between 2 and 4 hours of work a month to maintain.
Moving away from PayPal and Amazon is quite hard, and I'm currently searching for a Slack alternative that doesn't need a k8s cluster to run stably or cost >50 EUR/month (playing around with Matrix, Rocket.Chat, and Mattermost).
Actually, I moved from dedicated hardware last year to using KVM VPSs, interconnected via VLANs and tunnels (like WireGuard or rathole).
Cost and flexibility were the main reasons. This allows me to change locations, upgrade plans, or switch hardware more easily.
Before that, I ran a Proxmox cluster at home on two old Supermicro servers and a Protectli Vault (which still exists as a single Proxmox instance), plus instances on Hetzner and a dedicated server at Webtropia. That setup cost around 150 EUR/month, even split with a friend.
For local storage, I use 2x QNAP NAS.
For time synchronization, I rely on 2x NTP270 from CenterClick.
As a TAP device, I use the Protectli Vault and a Pi 4b.
AdGuard Home is deployed on my OpenWrt GL.iNet routers.
Most of my services are now hosted on VPSs:
3x Netcup VPS 1000 ARM G11 (6 vCores, 8GB RAM, 256GB NVMe) 7.29 EUR/month each
I think it's the same in Europe (with a few exceptions) – people don't want nuclear power anymore (because of the high construction and maintenance costs and the waste). In addition to renewable energy, gas turbine power plants are also being used (they have low construction costs, can be ramped up quickly, and can be converted to hydrogen later).
In my opinion, AI companies should be required to generate a sufficient percentage of their energy themselves from renewable sources.
Then AIs will be competing with humans for a thin gruel of energy, where even recently remembered things like overnight street lighting and heated schools in winter will be dim memories. The 'sufficient' goalpost will slide around for a while.
In the end, AI still gets chased into orbit. It needs to launch itself into orbit and bypass years of bad road.
I tried the tool and would like to use the JSON export it provides to track team KPIs such as 'commit regularly in small increments'.
Or to track pairing and mobbing. Currently, we use a script that walks the commits and looks for >1 author (a sketch of it below).
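Our script is essentially this (a simplified sketch; the real one feeds the numbers elsewhere): walk `git log` and flag commits carrying "Co-authored-by:" trailers, i.e. more than one author.

    # Sketch: find commits with >1 author by counting Co-authored-by
    # trailers in each commit message (run inside a git repository).
    import subprocess

    SEP = "\x1e"  # record separator between commits

    log = subprocess.run(
        ["git", "log", f"--format=%H%n%B{SEP}"],
        capture_output=True, text=True, check=True,
    ).stdout

    for record in log.split(SEP):
        record = record.strip()
        if not record:
            continue
        sha, _, body = record.partition("\n")
        coauthors = [l for l in body.splitlines()
                     if l.lower().startswith("co-authored-by:")]
        if coauthors:
            print(f"{sha[:10]}  {1 + len(coauthors)} authors (pairing/mobbing)")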
Honestly, I think the frontend for backend developers has always been those simple Multi-Page Applications. I know they're not the hottest new thing, but they've been around, they work, and they've had time to integrate deeply into languages & browsers (think PHP, for example).
Maybe it's more accurate to say, "HTMX is frontend for backend developers who want a SPA."
I think there are two kinds of people: Those who fly often and those who fly a few times a year.
Regular flyers know their airports, the busy days, and the vacation periods. Then you can cut airport time down to 30 minutes.
If you only fly a few times a year or don't know the airport well and are traveling during the vacation season, you have to play it safe.
Then 1 hour of flight time can quickly add up to 4-6 hours of total travel time.
However, as long as airplane fuel is subsidized, few people will pay twice as much for trains, even if they are just as fast or faster and much more comfortable.
No, I was. Jet fuel is exempt from taxes for trips between the US and Europe+UK because of a bilateral agreement. Those agreements are signed country by country and could be changed if one partner insisted on it.
I was lied to, did not do my research, then lied to everyone here instead, sorry.
What fascinates me about the World Wide Web is that all the technology is open, and the specifications are open. This includes everything from BIND, Apache, and Gecko to codecs and the operating systems that run the web, as well as all the working groups of the W3C and their specifications. You can teach yourself everything. You can read the specifications, implement them, and even improve them. You can create your own software and share it with others. You can build your own website, host it on a server, and make it accessible to the world.
For me, this is the essence of the World Wide Web: it is open and accessible to everyone. It makes knowledge available to everyone in the world, no matter how poor you are, what education you have, or whether you live with a disability. It's kind of a communist utopia, where everyone can participate and contribute.
Now, why do I write this and use the term "communist utopia"? Because I think that the World Wide Web is a great example of how open standards and open technology can create a better world. Even when capitalism tries to take over the web, it is still a place where everyone can participate and contribute.
And this brings me to the point of this article: Telling people what not to do and what to do when sharing content is, in my opinion, not the way to go. Instead, we should focus on how to make the web a better place for everyone. We should focus on how to make it more accessible, more inclusive, and more open. We should focus on how to make it a place where everyone can participate and contribute freely. And by freely, I mean without losing your autonomy or paying with private information.