(1) Are there any plans to make this compatible with the DuckLake specification? Meaning: instead of using Iceberg in the background, you would use DuckLake with its SQL tables? My knowledge is very limited, but to me, besides leveraging DuckDB, another big point of DuckLake is that it uses SQL for the catalog instead of a confusing mixture of files, thereby offering a bunch of advantages, like not having to care about the number of snapshots, and better concurrent writes.
(2) Might it be possible that pg_duckdb will achieve the same thing at some point, or do things not work like that?
(1) We've thought about it; no current plans. We'd ideally reimplement DuckLake in Postgres directly, such that we can preserve Postgres transaction boundaries, rather than reuse the DuckLake implementation, which would run in a separate process. The double-edged sword is that there's a bunch of complexity around things like inlined data and passing the inlined data into DuckDB at query time, though if we can do that then you can get pretty high transaction performance.
(2) In principle, it's a bit easier for pg_duckdb to reuse the existing DuckLake implementation, because DuckDB sits in every Postgres process and they can call into each other, but we feel that architecture is less appropriate in terms of resource management and stability.
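For context, here's roughly the DuckLake model the question refers to, sketched with the DuckDB Python API: the catalog is ordinary SQL tables (here in Postgres) and only the data files go to object storage. The connection string, the DATA_PATH option, and the table/bucket names follow my reading of the DuckLake docs and are assumptions; treat this as an untested sketch.

```python
import duckdb

con = duckdb.connect()
con.install_extension("ducklake")  # plus the postgres extension for a PG catalog
con.load_extension("ducklake")

# All catalog state (snapshots, schemas, data-file lists) lives as plain
# rows in the Postgres database; only Parquet data files go to DATA_PATH.
# No JSON/Avro manifest files as in Iceberg.
con.sql("""
    ATTACH 'ducklake:postgres:dbname=lake_catalog host=localhost'
        AS lake (DATA_PATH 's3://my-bucket/lake/')
""")
con.sql("CREATE TABLE lake.events (id INTEGER, payload VARCHAR)")
con.sql("INSERT INTO lake.events VALUES (1, 'hello')")
```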
Agreed. One of the researchers quoted has a good take on this:
> “There are definitely groups out there who would like to push the responsibility of reducing carbon emissions away from corporations and onto individuals, which is problematic,” said co-author Dr Ramit Debnath, Assistant Professor and Cambridge Zero Fellow at the University of Cambridge. “However, personal carbon footprints can illustrate the profound inequality within and between countries and help people identify how to live in a more climate-friendly way.”
Because most damage is institutional, most of the work has to be done by institutions (states) and corporations. You can track the carbon footprint of rich people all you want; it won't change a thing until states and corporations are forced to do, or voluntarily do, what is necessary. It is like trying to do cost optimisation in your company by slashing the department with the lowest budget while leaving the ones with big fat budgets almost untouched. It doesn't make sense and will not work. Personal responsibility will only make sense in a framework where states and corporations first do their fair share (see the yellow vests in France).
Is it just me or do others find the Electron bashing a little over the top as well?
I mean VS Code, Discord, Slack and Obsidian are all in very widespread use and work perfectly fine for me.
Are there alternatives to Electron that require fewer resources? Tauri seems to be proof that there are.
But I think there is real value in large communities and backing. Electron seems to work perfectly fine for everyday use, as is evident from the applications mentioned above. I would personally choose Electron over Tauri even for greenfield projects, simply because Electron powers applications that are in much more widespread use.
While I appreciate VS Code, I think Discord, Slack and Obsidian are not selling the platform at all. They feel sluggish and have poor ergonomics. Now, the second may or may not be attributable to Electron, depending on whether you buy the cultural argument, but the first clearly is. Mumble is super responsive in comparison to them, and most note-taking apps run circles around Obsidian.
Having a latency on local clicks and transitions is not my idea of fun.
VS Code is an outlier here. And it's getting slower and slower with age, while Sublime Text is getting faster.
These things are all very subjective. Not everyone is affected the same way by latency. I hear people complaining about it all the time on the internet, but in practice I never notice it, and I don't know anyone who does either. Ergonomics is the same: I think Discord is really nice to use, and all the alternatives I've tried have been worse.
To be clear, your opinion is absolutely valid; I just don't think it applies equally to everyone.
This is true for me, as well. Whenever I start a 50 MHz computer and something is instantaneous instead of displaying a "Please wait…", or I start a modern computer with the Haiku operating system instead of Windows or Linux, I'm reminded just how responsive things can be.
On Android, I stumbled upon a file explorer, called Little File Explorer, that feels like this. It's 170 KiB, and generally opens directories without a feeling of transition. Instead of feeling like it's laboriously building a view, it feels almost like the view's already built. Alas, it doesn't yet remember scroll position when paging back.
Modern things can make stuff that would be slow, fast; but they also usually make stuff that can be fast, slow. I believe this normalisation of slowness is a "Normalisation of Deviance".
Discord is not working "perfectly fine" on desktop for many people. Usually, the browser version is faster, needs fewer resources, has fewer platform integration problems (mic/camera access etc.), and crashes less often. It also has fewer critical security vulnerabilities, since a browser gets patched much faster than Electron in general, and than Discord in particular (they tend to be on an older branch due to some native C++ libraries that, as far as I can tell, have no user-visible impact).
And VS Code has to be contrasted with Atom, which was the same thing but so much worse: it takes a lot of effort to make Electron work well.
I maintain an open-source app built with Electron. It serves tens of thousands of users every month, and in five years nobody has complained that it is built with Electron. I'm not saying that Electron is perfect, or that it couldn't be a bit more performant, but as a solo maintainer (and entrepreneur) it helps me ship something that saves people time. The burden of maintaining an application is already huge. Having to juggle multiple environments would be a hassle, and I definitely wouldn't do it.
That being said, if a "drop-in" alternative were available, I would probably try to switch at some point. But the alternative would have to be on par with the ecosystem (including packaging, binary signing, etc.), the community, the ease of use... I don't think there is such a thing yet.
Slack is currently eating 600 MB of my RAM, for something I check maybe once an hour. You know what I'd like to use my RAM for instead? Gradle. kotlinc. IntelliJ. Things that I actually use to do my work, and not a bad IRC that wants to make me pay for the privilege of seeing old messages. Electron is a demonstration of laziness and a living proof that software companies do not give a single shit about their users and just want to push more crap, for cheaper, all the time.
> a living proof that software companies do not give a single shit about their users and just want to push more crap, for cheaper, all the time.
Exact opposite: to me, Electron is living proof that software companies correctly care a lot about building a product people want, and correctly realize that the large majority of people correctly do not care that one of their top productivity apps uses $1 worth of RAM, but want the app to have the features and UX they need instead.
The irrational obsession of part of the Hacker News crowd with the RAM usage of web apps is borderline psychotic. Man, take a chill pill, go get more RAM for a couple bucks once every three years, and let the engineers focus on UX and features, OK? I don't want my productivity app to be a code-golf exercise.
Overprivileged HN commenter believes that upgrading RAM is something every user can afford and wants to afford (oh, I'm sorry, let me blow 50 bucks for your pretty eyes because you couldn't be arsed to not blast a whole JavaScript runtime into the app you're forcing me to use to upgrade my drivers or launch a game), and ignores physical limitations (unupgradable laptops, configs from back in the day that are already maxed out).
Today's software runs worse on modern hardware than yesterday's does, because you "let the engineers focus on UX and features" without teaching them to care for a single moment about _actual_ user experience instead of whatever bullshit their product owner put out.
Talk to any real engineer (not the US bullcrap of "oh sure, everyone out of a code camp is an engineer") and ask them whether quadrupling the weight of a bridge and multiplying resource consumption by ten is an option. You are making our profession look like fucking clowns.
> most people really don't care about 600 MB of RAM
There's a difference between not caring that your computer is sluggish because of its programs, and not being able to tell the difference, and therefore to demand less sluggish alternatives.
Normies put up with poor user experiences to the exact degree that software engineers deliver them.
That my UI should be significantly slower than my screen’s framerate is a false premise.
Go ask your parents if they're not pissed off that the laptop they bought for cheap is now struggling to run Peggle Deluxe and they don't understand why, while there are 7 Electron apps running in the background eating all that memory.
People don't care about 600MB. They care about how using their PC feels, and Electron is a massive contributor to it feeling like shit.
Our whole industry is a big joke technically. On the business side, though, we managed to get people used to downloading huge apps, that run slow, and are full of bugs. That's admittedly a big business achievement, but not the kind of world I want to live in.
Just like people learn to completely ignore popups (and are often not even aware that a popup appears, even an important one), they learn to accept bugs, overall bad apps, and the fact that they need a new smartphone every two years to keep up.
Doesn't mean at all they wouldn't enjoy better technology. They just do not have a choice.
>the large majority of people correctly do not care that one of their top productivity apps uses $1 worth of RAM, but want the app to have the features and UX they need instead.
The large majority of people care about programs that run fast and feel snappy, programs that don't look like some high school dropout's modern art project, programs that don't demand a pocket supercomputer to bankrupt you, programs that just get shit done because computers are just a god damn appliance comparable to a toaster from Walmart.
Apple charges $200 for 8 GB of additional RAM, so the cost of 600 MB of Apple RAM is $15. That's the cost of one month of a Slack subscription. You're likely to use your laptop for ~4 years ≈ 50 months, so the RAM cost of Slack is around 2% of the overall Slack cost, even on Apple hardware.
Users pay 2% more to get more RAM; Slack developers get 200% productivity by not having to deal with low-level optimization crap, and can focus on building features and UX.
That's how the world works. But some angry HNers can't wrap their heads around it
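If you want to sanity-check that arithmetic yourself (all the numbers above are assumptions, of course):

```python
# Back-of-envelope check of the numbers in the comment above.
ram_price_per_gb = 200 / 8           # $200 for an 8 GB Apple RAM upgrade
ram_cost = 0.6 * ram_price_per_gb    # 600 MB of Slack -> ~$15
subscription_cost = 15 * 50          # $15/month over ~50 months of laptop life
print(f"${ram_cost:.0f}, i.e. {100 * ram_cost / subscription_cost:.1f}% "
      f"of the ${subscription_cost} paid to Slack over the laptop's lifetime")
# -> $15, i.e. 2.0% of the $750 paid to Slack over the laptop's lifetime
```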
The thing is that Slack developers are not the only ones who think that way.
So now people lose some percentage to their mail client, their chat client, their IDE, ...
A bit more RAM for developer comfort, a bit more processing power, a bit more to download, ...
Additionally, at the end of the day, some people do interact with non-developers and realize they tend to have less beefy devices.
Being the tech support in the family can be interesting on that front. For example, being asked what's causing my father's or grandparents' relatively new laptop to be so laggy, despite being so much better than its predecessor, and finding out there's no happy answer. There's no explaining that some software regressed on this front and they should just accept more electronic waste.
No, the cost of 600 MB of RAM is $200. You just end up with bonus RAM to waste on more Electron apps.
This whole "this is how the world works" bullshit is post-hoc rationalisation done by wealthy idiots trying to justify their purchase by saying "it's just 5 cents a day if you consider it'll last ten years". That only applies if the purchase you're making is insignificant to you. So, congrats, $200 is insignificant to you. It's not, for most of the world, but you'd see that if you pulled your head out of your ass.
This isn't even "not having to deal with low-level optimization". Using a normal toolkit is not low-level optimization; it's the bare minimum.
Damn, la French Tech sure is looking great with people like this.
Look, I'm not even stating an opinion here, I'm just stating facts.
Clearly, companies and developers are behaving in a way you don't like. You seem to believe that billion dollar companies are just stupid, that all developers are idiots.
I'm explaining to you the correct way to interpret this behavior, an explanation that does not assume that a majority of successful companies and people are stupid, but that instead relies on rational arguments.
Of course I'd love it if there was a perfect way to develop software once without micromanaging memory, run it on web, desktop, mobile and have it be super performant at the same time. It just does not exist as of today, and companies and people make rational decisions based on this. It is how it is; stop raging, accept it, or build something better.
>Look, I'm not even stating an opinion here, I'm just stating facts.
No, you're stating an opinion and trying to pass it off as facts.
>You seem to believe that billion dollar companies are just stupid, that all developers are idiots.
If you work with developers, you know we're all idiots at heart. And lol, absolutely yes, billion dollar companies are stupid. You think that scale prevents stupidity? No; if anything, it scales the stupidity up even harder. A company doing stupid things as a self-funded startup with 5k in the bank and a multinational megacorporation with 50 billion in the bank have equal opportunities to be stupid. The billion dollar corporation even gets to use its sheer scale to walk over its stupidity and not die from it.
> stop raging,
The day I stop complaining will be a miserable one. Stopping the rage is what led to the absolutely dreadful state of software development today.
>an explanation that does not assume that a majority of successful companies and people are stupid, but that instead relies on rational arguments.
One does not preclude the other; the rational argument is still a moronic one.
>Of course I'd love it if there was a perfect way to develop software once without micromanaging memory, run it on web, desktop, mobile and have it be super performant at the same time.
Have you _never_ looked at any UI toolkit in existence, ever? Even Flutter is a better option than Electron if all you want is a multiplatform app, and this is coming from someone who doesn't necessarily hold Flutter in his heart. The bare minimum, if you're going to make your life easier, is to not pick the absolute worst option, which is Electron. There's a scale of multiplatform toolkits, and Electron is at the bottom end of it, being neither a good technical choice, nor a good product choice, nor a good choice for your users. Electron benefits a single group: developers, but mostly company management, happy to crap out software.
> Have you _never_ looked at any UI toolkit in existence, ever?
Funny that you say that, I actually did my PhD on this topic! You can find a state-of-the-art survey of UI toolkits on pages 28 to 96 of my PhD manuscript: https://hal.science/tel-01455466 I wrote it in 2015, so it's dated for sure. But the short answer is: yes, I have looked at A LOT of UI toolkits, and one thing I am sure of is that you are blissfully unaware of the socio-technical aspects of toolkit adoption.
Even if Flutter had existed, it would absolutely not have been an obviously smart move for Slack to DOUBLE their engineering workload by picking it instead of Electron while still having to develop a web-based version. For what result? The app would most likely not be noticeably faster, if anything, because the engineering effort that went into optimizing Slack's performance would have been squandered on re-implementing and maintaining Slack in another toolkit.
As an industry we trade runtime efficiency for lowering the bar on developers; we've been doing that for 40 years now. The industry has been growing so fast that this is unavoidable, and at a macro level probably a net positive.
You seem to believe that companies do what's best for their users. Not sure what world you live in: companies do what seems most profitable for them (and pray that it is).
Can we at least agree that it would not be that difficult (or costly) for Slack to expose an API allowing for third-party clients? That way I could have a lightweight CLI client, and I would be fine with most users staying on their crappy Electron client.
I guess they don't do it for a reason, which is probably not user experience. Maybe having only official apps is (or seems, again) better for lock-in?
1. An external API is a product: you have to maintain it, keep it backward compatible even when you change your internal models, monitor it, protect it against attacks, etc. So no, I do not agree that it would be easy for Slack to expose an external API for third-party clients. It would be at least millions' worth of investment.
2. A lightweight CLI client for Slack? Let me tell you, this would have a very, very tiny user base. Probably just you, and even you would be bored of it after a week. Would it be worth it for Slack to invest millions in an external API just so a couple hundred geeks can make their own crappy clients?
3. Analytics. Slack runs analytics on usage of their app in order to know what users use and want. You can't do that if you don't own the frontend.
4. Brand. If one of your main competitive advantages is a good UX (and believe me, it is for Slack), would you want to grant people the right to create crappy apps that ruin the UX and turn people off your product? This is what is killing Android's brand value, for example. Sure, it's open, but it means there are a lot of crappy UIs that turn people off.
The beauty of liberal capitalism is that ultimately, at least to some extent, what is good for users is good for the company, so incentives are roughly aligned and very unlikely to be completely opposed, as you seem to suggest. So yes, I believe that companies take strategic decisions (such as not shipping an external API and 5 different native clients) in large part because it does indeed benefit the majority of their users.
1. I would hope that they already maintain and protect their private API against attacks (it has to be exposed to the internet, right?), and that they keep backward compatibility for the Electron clients (and isn't the Android app native?) out there that are not updated. You forgot to mention something like "they would have to invent access control", to which I would answer that they already need that. They would literally just have to open their API to third parties. I doubt this would cost millions.
2. Don't assume too much. I use IRC from a CLI. But anyway, you are just repeating your previous point, which is that you think it would cost millions.
3. Well, they would know what the third-party apps use in the API. They could also provide integrations for the most popular languages, and those would send telemetry (what do you think the Google Play Services do?). For my crappy app, they probably don't need to know what I do; I am just a useless geek, as you said.
4. Counterexample: I was always able to connect my crappy app to my Gmail account, and it did not prevent Gmail from essentially taking over e-mail. But the couple hundred geeks who don't like the web frontend can use their crappy e-mail apps, and everyone is happy.
> The beauty of liberal capitalism is that ultimately, at least to some extent, what is good for users is good for the company.
Respectfully, that is the most naive comment I have read today. I don't even know where to start answering that, so I'll just pass :-).
The code of the web engine can be shared among several apps, instead of each of them having its own copy in RAM, making it less of an issue to run several of them.
Only if they use the same version of Electron. Also, apps tend to statically link their libs nowadays to reduce the number of runtime dependencies, from what I've seen. And I'm pretty sure apps distributed through Flatpak and the like come with their own copy of the corresponding .so and won't share anything either, unless I'm wrong and Electron does get shipped in a shared runtime.
The .so is shared. All those uncompressed cat gifs, no.
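You can check this yourself: RSS counts shared pages in every process that maps them, while USS ("unique set size") counts only pages private to the process, so the gap is roughly what's shared. A quick psutil sketch (the process-name matching is a guess; adjust for your platform):

```python
import psutil

# memory_full_info() may need elevated privileges on some OSes.
for proc in psutil.process_iter(["name"]):
    name = (proc.info["name"] or "").lower()
    if "slack" in name or "electron" in name:
        try:
            m = proc.memory_full_info()
            # rss - uss is roughly the memory shared with other processes,
            # e.g. a common .so mapped by several Electron apps.
            print(f"{proc.pid:>7} {name}: rss={m.rss >> 20} MiB, uss={m.uss >> 20} MiB")
        except psutil.AccessDenied:
            pass
```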
I'm not familiar with internal browser architecture, but do they make at least a token effort to not render/run attached javascript for elements that are not currently in view?
I haven't measured, but my gut feeling is Electron apps get extremely crappy when you have something like 30 memes in a row in a chat channel and make the mistake of switching to it.
Edit: hey, what happens when you open a 500 MB log file in VS Code?
Idk. I have Electron programs running which take up much less resident memory. I do have Slack running in my browser though, and the whole browser, running Slack, Outlook 365 and a bunch of other tabs, uses 600 MB of resident memory, of which... 324 MB is used by Slack... so I would say it's more of a Slack problem than an Electron problem. AFAIK the overhead of Electron is around 100 MB per program. This is still bad, sure, but far less bad than what you describe.
I feel the same. There are currently no viable alternatives to Electron that I know of. It is sad, but this is the current state of UI frameworks if you are targeting cross-platform with an identical look and feel.
Also, Electron comes with its own advantages, which many on HN seem to forget.
code-server, for instance, was very easy to do because VS Code was built using Electron. It runs virtually anywhere, and on a headless device it uses fewer resources than most WMs if you need a full-blown IDE.
Is it just me, or do prices for a lot of UI/design resources seem high compared to prices of resources in other parts of IT? $100 for a 218-page PDF seems quite steep to me.
I use my own not-yet-ready-for-release app called Noteworthy [1], but here is a list of some of the roamlikes I find most inspiring:
> Athens Research -- free and open-source Roam competitor made by someone who failed an interview for a job at Roam :) -- https://github.com/athensresearch/athens
> Obsidian -- free but non-open wikilink system based on Markdown files -- https://obsidian.md/
Could someone explain to me what the difference between a computer algebra system like Symbolics.jl and a theorem prover like Coq is?
Is that more in the nuances or is there a fundamental difference between these two (referring to the terms and not their specific implementations in Symbolics.jl and Coq respectively)?
Or is this question unreasonable to ask in the first place?
Computer algebra systems are usually large, heuristic systems for doing algebraic manipulation of symbolic expressions by computer. Roughly, they’re there to aid a human in doing mechanical algebra by working with symbols and not just numbers. Generally, the results coming out of a CAS are not considered to be “proved correct”, and ought to be verified by the programmer/user.
Proof assistants aim to allow one to write down a mathematical assertion in a precise manner, and to help the user write a formally verifiable proof of that assertion.
Extremely crudely, a CAS is like a super-powered calculator, while a proof assistant is like a super-powered unit test framework.
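To make the "super-powered calculator" half concrete, here's roughly what a CAS session looks like (SymPy as a stand-in for Symbolics.jl or Mathematica; none of these results are formally proved, just computed by the system's simplification algorithms):

```python
import sympy as sp

x = sp.symbols("x")

# The CAS manipulates expressions symbolically via built-in algorithms
# and heuristics; the user is expected to sanity-check the results.
print(sp.simplify((x**2 - 1) / (x - 1)))     # x + 1 (note: undefined at x = 1)
print(sp.integrate(x * sp.sin(x), x))        # -x*cos(x) + sin(x)
print(sp.diff(-x*sp.cos(x) + sp.sin(x), x))  # x*sin(x): a sanity check, not a proof
```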
A rough way to think about a tool like Coq or Isabelle is that it provides a very simple core foundation built out of some mathematical logic and mechanisms for manipulating statements in that logic ("if P implies Q, then X"). Proofs end up being sequences of applications of rules that manipulate the statements until you reach some conclusion. People build up theories on top of this core (and other theories), which introduce new mathematical constructs and theorems/lemmas that represent their properties.
A computer algebra system (CAS) tends to have a more complex core because instead of only having some logic as its basis, it might know about higher level constructs like polynomials and such. This allows it to more easily operate on those constructs directly via specialized algorithms and heuristics without having to build up a huge foundation of theories that they live on top of.
That said, you could implement lots of things a CAS does in a theorem prover; it would probably just be pretty awkward to work with, and possibly quite slow. Similarly, lots of CAS tools (like Mathematica and Maple) provide features for doing proofs that are similar to theorem provers. One place I would expect them to differ, though, is that the small, simple core of a theorem prover allows people to carefully verify it, such that the theories atop it inherit the verification evidence of the core. I do not know whether any such verification evidence exists for the large kernels that make up tools like Mathematica/Maple.
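And for contrast, the proof-assistant half: every step below is checked by the prover's small kernel, so the result is formally verified rather than heuristically computed. A deliberately trivial Lean 4 sketch, using a lemma from Lean's core library:

```lean
-- Lean 4: the kernel checks that this term really proves the statement.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Even plain computation is kernel-checked:
example : 2 + 2 = 4 := rfl
```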
Hi, not sure if I'm just completely off here, but I'm wondering how this relates or compares to processing things with Kafka and Kafka Streams.
If I'm reading things correctly, the Kafka equivalent of the workflow in the article would be to have your producer write to some topic via hash-based partitioning (the default partitioning algorithm) on the key you are interested in; your consumer would then just read it, and your data would already be sorted for the given keys (because within a partition Kafka has sorting guarantees) and also be co-partitioned correctly if you need to read in some other topic with the same number of partitions and the same logical keys produced via the same algorithm. No?
This is the most basic pattern for distributed joins - you hash on the join key in both tables and shuffle data based on hash ranges. In some systems like Redshift you can designate the key for distribution so that "related" records are already co-located on a single shard.
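A toy sketch of that idea: same key, same hash function, same partition count means matching records land on the same partition number, so each consumer can join its partitions locally with no shuffle. (Plain Python with a stand-in hash; Kafka's default partitioner actually uses murmur2 over the key bytes.)

```python
NUM_PARTITIONS = 4

def partition(key: str, n: int = NUM_PARTITIONS) -> int:
    # Stand-in for Kafka's default partitioner (murmur2(key_bytes) % n).
    return sum(key.encode()) % n

orders = [("alice", "order-1"), ("bob", "order-2"), ("alice", "order-3")]
users  = [("alice", "DE"), ("bob", "FR")]

# Producing both topics with the same key, partitioner, and partition
# count co-locates matching keys on the same partition number.
order_parts = {p: [] for p in range(NUM_PARTITIONS)}
user_parts  = {p: [] for p in range(NUM_PARTITIONS)}
for k, v in orders:
    order_parts[partition(k)].append((k, v))
for k, v in users:
    user_parts[partition(k)].append((k, v))

# Each consumer instance can now join its partitions locally, no shuffle:
for p in range(NUM_PARTITIONS):
    lookup = dict(user_parts[p])
    for k, order in order_parts[p]:
        print(f"partition {p}: {order} -> {lookup[k]}")
```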
> our data would already be sorted for the given keys (because within a partition Kafka has sorting guarantees)
It's been a while since I used Kafka but I don't remember "sorting guarantees". Consumers see events "in order" based on when they were produced, because each partition is a queue.
Yes, I guess my point is: when you use Kafka in combination with Kafka Streams and produce things partitioned the way you need them for consumption, you don't need to do any shuffling when you want to join, because the data is already partitioned correctly.
You seem to know what you're talking about. Any recommendations on learning resources for this type of flow? Or for really understanding which platform works in each situation?
I'm learning proper data flow in real time as I look to transition the ETL of product data into Postgres to a more suitable system.
Finding the right learning resources is difficult! Cheers.