Hacker News | terminalhealth's comments

Are there good alternative building materials? Perhaps wood?

Edit: It seems I am shadow banned. I am going to take this as a deep insult.


No, it’s worse. You are not shadow banned, people just don’t care about what you have to say.


Harsh.


I agree, this is a harsh place.


Wood works. There are some new methods to glue sheets together to make long structural beams and much taller buildings than before.

Cross-laminated timber, or CLT.

https://en.m.wikipedia.org/wiki/Cross-laminated_timber

12 and 18 storey structures have been built.

As far as fires go, yes, they will burn eventually, but they won’t weaken as quickly as steel. You get more time to evacuate.


Awesome. Outside of a few strange political situations like HK/Singapore, we probably don't actually need to build structures higher than 12-18 storeys anyway - most examples I can think of are vanity projects.

It strikes me that there might not be a replacement for concrete in other forms of building though - what about dams, bridges, etc?


They’ve built bridges (or at least a bridge) with this kind of wood.


I can see your comments, I don't think you're shadowbanned.



> I stopped using GTK applications a while ago

So you're only using something like i3, XMonad or dwm?


Ah, no, I'm using lxqt at the moment. It's not perfect but it's okay, and I trust its developers' common sense. Every once in a while I fire up Plasma 5, too, but I'm not convinced. In the last year or so it seems to have gone down the mobile-inspired UI rabbit hole and it's getting increasingly awkward to use.

Other than that, I use the same kind of applications you'd expect any desktop user to use, from file managers to text editors and from office suites to web browsers. Only most of them are Qt-based :). The only GTK applications I still use are Firefox (GTK-ish) and Emacs, which can still be compiled against GTK2 which is a good enough compromise for me.

It's nothing personal, I don't want to get into a big rant about how Linux used to be about choice and about how Gnome is a Red Hat conspiracy, or whatever flag is being waved in 4chan & friends. I just don't like GTK3's UX choices. Oversized widgets, awkward scrollbars and hamburger menus aren't pleasant to use on a 30" monitor. A few years ago I tried to get around that with a custom theme, but theming is... kind of frowned upon in GTK land, so it didn't end well.

But, you know, their code -- their choice. I don't like the choices, but I think it's a big step forward from the '90s and a big test of maturity for the open source community. Making (what I believe to be) our own mistakes is way better than cargo culting Microsoft and Apple, which is what we did up until 5-6 years ago.

Edit: also, why are you folks downvoting the parent comment? It's a productive question, even if it looks uninformed (i.e. I'm guessing the downvotes are because it's conflating "GTK applications" with "desktop applications", which looks silly but have you had a look at Ubuntu lately?). If you think that's a stupid question, fine, but are you really that sure you've never asked a stupid question in your life?


>Every once in a while I fire up Plasma 5, too, but I'm not convinced. In the last year or so it seems to have gone down the mobile-inspired UI rabbit hole and it's getting increasingly awkward to use.

Can you cite an example of this? I use Plasma 5 and I genuinely don't know what you're referring to - in fact I really enjoy the way the keyboard is a first-class citizen, with sensible single-key hotkeys for all common actions. I'm more productive in Plasma than in any other DE - it's full of wonderful little power features that make life pleasant, without sacrificing ease of use if you haven't gotten around to learning them yet.

I know there's some effort at mobile convergence behind the scenes, but I can't perceive any negative impact on me day to day at all.


> Can you cite an example of this?

Sure, several :). For example:

- The tree layout of Settings Manager is no longer around (it can still be enabled at compile time, though, and I think e.g. SuSE still does it). The default one is the strange hierarchy/screen-based layout, which is pretty obviously meant for touch interfaces.

- The same layout is used by Discover and it's very obviously a mobile design. Actually, anything Kirigami-based is like that :).

- The new Virtual Desktops settings page in System Settings is very obviously reworked from the same perspective. Instead of two spinboxes (number of rows, number of desktops), you now have to click "Add" to add a new desktop, then manually edit its name (by default, they're all called "New desktop").

- The default Alt-Tab switcher, which is remarkably awkward to use from the keyboard (switches applications and brings windows to the front at every step, somewhat like Fluxbox' alt-tab if anyone remembers that) is actually very smooth to use on a touch device. You can sort of see that on a desktop, too, if you try to use your mouse the way you'd use your finger on a touchscreen. You can hold-to-scroll through the left-hand side view and switch to a given window by clicking its thumbnail.

(Edit: plus the usual suspects: oversized widgets and titlebars, large icons with humongous space between them etc.)

The good thing is that Plasma is super configurable. I don't mind changing default settings that I don't like. In fact, I'm all for fashionable defaults; I completely understand the dynamics involved there.

But I also don't trust that I'm going to be able to change these default settings 12-18 months from now -- not to settings that are appropriate for a desktop machine with a large monitor, in any case. And KDE is a big beast. It's hard to migrate settings even between two computers running the same KDE version. If you go all in, it's hard to turn back. I've already done that once with KDE 3.2 and it took me months to sort it out when 4.x hit the market. I'm not sure I want to do it again.

Edit: FWIW, I do try to keep an eye on it because it's actually the only Qt-based desktop environment that's unlikely to get abandoned soon :). Plus, while Plasma 5 feels a bit bumpy to me, it's definitely better than KDE 4, and that's a big deal. Last night, in fact, I tried my hand at hacking on Breeze a little, to make it slightly more compact. It's definitely better than other flat themes but boy is it awkward to use with those huge widgets. If I can get it to look okay, I'll try to see if there's a way I can get this into upstream, too (maybe make some things configurable?), or just package it separately as a compact theme for me and anyone who's interested.

Historically, the KDE community has been super friendly and willing to help newcomers. I'm not sure if it's the same now that there's a visual design group and whatnot, but I'd definitely rather write code than whine about software I get for free :).


Settings Manager: on my machine I can configure it to have the tree view in its hamburger "settings" button, at runtime; no recompile required. Experimenting with it however I personally prefer the default "sidebar" style, as I can see more sections at once - the hierarchies don't pay for their overhead in my opinion.

Similarly, I don't think you can chalk the change in Virtual Desktop interface up to optimizing for mobile at the expense of desktop. You still have a spinner for "rows", so it's not like it's an effort to get rid of spinners. I think it's more likely to do with trying to help users organize and categorize their activities, which is an angle they've been pecking at in various forms for a while. It's not any harder to click "Add" than the up-arrow on a spinner, and it's not any harder to click "-" to delete a desktop than a down-arrow on a spinner; the difference is that now desktops have identity instead of being fungible units, and these labels show up if you hover over the taskbar desktop switcher (a decidedly non-mobile feature, as you can't hover on a touchscreen). You can also jump straight to them with KRunner, which I imagine is great if you have a lot of them. Maybe it's a silly feature, but it doesn't really hurt to let them try - it's not like you're adding desktops all day.

Alt-tab - again, it's trivially configurable with a checkbox "show selected window". I don't think one default is obviously better than another in this case, although now you've pointed it out I think I'll try it unticked for a while. Arguably, changing windows "immediately" is more intuitive for new users. At any rate I don't see the relationship with touch. Yes, I can click on the window thumbnail - a lovely feature! Better than endless cycling when there's many windows open. Incidentally, I found the correct settings page for this by typing "alt-tab" into KRunner - incredible!

Anyway - I don't think making a touch-friendly interface is bad, as long as it's not at the expense of a no-touch interface. I've never felt like the inability to touch my screen has limited my expressive power in Plasma. After all, things which are easy to touch are also easy to click on! There's no real need to make interfaces tiny and fiddly. As for defaults, I wouldn't worry too much about these settings going away. This isn't Gnome - KDE's entire design philosophy is based around configurability.


Yeah, I don't know if I gave the right impression with that last post. I don't think Plasma 5's non-touch experience has been going down disastrously lately. There's nothing that screams "made for mobile" that you can't change (even Breeze's huge widgets, I mean, there's always other themes). And if touch devices are fashionable and is what gets people interested in KDE and gets contributors on board, I'm by all means happy if that gets to be the default :).

But I'm not convinced the "not at the expense of a no-touch interface" part is going to stay true for long.

FWIW:

> Settings Manager: on my machine I can configure it to have the tree view in its hamburger "settings" button, at runtime; no recompile required.

Some distros still enable that feature but it's a distro-specific thing. I don't know if it's maintained anymore.

> Similarly, I don't think you can chalk the change in Virtual Desktop interface up to optimizing for mobile at the expense of desktop. You still have a spinner for "rows", so it's not like it's an effort to get rid of spinners.

No, but it's kind of awkward to use :). Clicking "Add" four times gives you four desktops with the same name, for example. It's definitely not an easier interaction model than the spinner-based one.

...but this sort of discussion (is it better to have an extra spinner, or an extra button and manually edit each desktop's name? Which one is more intuitive? Which one is more discoverable? Which one is etc. etc.) is kind of a bikeshedding dead end to me. As long as it receives bugfixes, as opposed to rewrites, for the foreseeable future, I couldn't be happier.

The bit about bugfixes vs. rewrites isn't just whining, it's kind of a pain point for small-time contributors -- which, realistically speaking, is how 90% of independent developers get into a project, we're all small-time contributors first (unless you're hired by a big company to work right on something full-time, which is increasingly common in the FOSS world, but not specifically for desktops). It's pretty hard to motivate yourself to contribute a fix when you know it's gonna be useless one or two years from now. Feeling like you're participating in the steady improvement of a thing is pretty nice. Feeling like you're participating in the perpetual churn of an eternal beta isn't much fun.


Hi, I'm still on lxde, and I wonder how lxqt compares at this point in time. But if you try to search the net you only ever get comparisons to xfce, mate, or whatever.

Would you be so kind to give me your take on lxde vs lxqt?


I haven't used lxde, so I can't tell you much about the comparison. What I can tell you about lxqt is that it has a "start" menu, a taskbar, a desktop switcher, icons on the desktop and Openbox :). It doesn't do anything that you haven't seen before, but pretty much everything it does, it does reliably.

It's... I dunno, it's like FVWM95, only from this century. I used Openbox + a bunch of tools cobbled together before. It doesn't do anything my old setup didn't do, but it sure is more comfortable. No scripts, no custom setups... all that was fun twenty years ago but I'm not a teenage l33t h4x0r anymore, I got work to do nowadays...

I'm not sure why people conflate it with Lubuntu, you can use it on any distribution, and the fact that it's very easy to package means there are few packaging-related problems with it. It's also pretty easy to compile from source if you need that, for whatever reason.


> it's like FVWM95, only from this century.

I wonder how you'd feel about Trinity Desktop Environment? I feel like Trinity and Lxqt are in the same space when it comes to system requirements.


They're a world of difference apart! TDE uses its own fork of Qt 3. Qt 3 is huge, and while Timothy Pearson, who maintains it, is an extraordinarily capable programmer, I doubt he and his team can maintain both it and TDE that well. LXQt, on the other hand, can be compiled against the latest Qt 5 libraries. Besides, TDE isn't "just" the desktop; there's a whole application suite in there, too. I doubt you'll get proper, 2019-level support for TLS, for example, in those applications. Bugfixes are occasionally committed, but whether or not they're enough is anyone's guess.

Plus, you get all the usual problems, like inconsistent theming (Qt 3 engines, unsurprisingly, don't work with Qt 5; Plastik, CDE, Motif and Windows are in both Qt 3 and Qt 5 but you need to manually add color palettes etc.)

Frankly, even though TDE pushes all the right nostalgia buttons and even though I instantly feel better about anything with Pearson's name on it, I don't think I want it on my systems :). I tried it and it's fun but a bit difficult to use in 2019.

They're in the same space in terms of requirements but a very different space in terms of bugfixes, compatibility and perhaps security.


I have used both.

I was very sure they have messed up a perfect thing that was lxde, but 5 minutes of using lxqt and configuring it, I had all the good bits of lxde with a lot of awesome new bits.

I was sold on it until it (18.10) booted to a black screen reboot loop and basic debugging didn't resolve the issue. I didn't have time to nail the cause but it could have very well been me tweaking config.

I am back on 18.04LTS but will be upgrading happily to lxqt at 18.04's EOL.


> I didn't have time to nail the cause but it could have very well been me tweaking config.

I've had a couple failed Ubuntu upgrades recently, with similar behavior ("oops, you can't boot anymore, sorry!"). And I only use it on one fairly boring machine doing fairly boring things with it—I don't tweak anything.


EvilWM is a nice non-gtk window manager (though I don’t actively avoid GTK, it has been my daily driver on and off for years).


KDE native apps are built on Qt, too.


This was what I was wondering too. I'm still learning about the ins and outs of Linux.


> recurrent projections would need to (1) form a 3rd party connection at every synapse and (2) know which synapses were to blame for the error

The most striking result in this regard is that you can get backprop-like learning with random backprojections. It works regardless because, on average, the randomly projected error signal is less than 90° away from the true gradient (which is good enough for hillclimbing), and the dynamics play out in a way that the learned weights adjust to the random projections:

https://www.nature.com/articles/ncomms13276
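A minimal toy sketch of this "feedback alignment" effect (my own illustration, not the paper's code; all layer sizes and hyperparameters are arbitrary): a two-layer network where the backward pass routes the output error through a fixed random matrix B instead of W2's transpose, and training still reduces the loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer net: tanh hidden layer, linear output.
n_in, n_hid, n_out = 4, 16, 2
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))
B = rng.normal(0.0, 0.5, (n_hid, n_out))  # fixed random feedback matrix

# Random linear teacher task.
X = rng.normal(size=(100, n_in))
T = X @ rng.normal(size=(n_in, n_out))

def loss():
    return np.mean((np.tanh(X @ W1.T) @ W2.T - T) ** 2)

loss_start = loss()
lr = 0.01
for _ in range(500):
    h = np.tanh(X @ W1.T)            # forward pass
    y = h @ W2.T
    e = y - T                        # output error
    dh = (e @ B.T) * (1.0 - h**2)    # error routed through B, not W2.T
    W2 -= lr * (e.T @ h) / len(X)
    W1 -= lr * (dh.T @ X) / len(X)
loss_end = loss()
```

The point is only that the loss drops even though no gradient information flows back through W2's transpose: the forward weights drift into alignment with the fixed random feedback.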

That being said, it does seem to be the case that the brain simply memorizes an awful lot, which must work by a different mechanism besides backprop, because backprop cannot do one-shot learning. I think one-shot learning is how the brain gets past large discontinuities in the model fitness landscape: it can learn linguistic, logical rules, fragments of general computations and facts that are discovered by exploration (which includes learning about whether it was Bob or Mike) and passed on culturally.

The brain basically outsources the problem of tunneling through large discontinuities to cultural/individual trial-and-error and episodic memory. The greatest consequence is that these bodies of knowledge can concern the improvement of the organization of knowledge itself, resulting in a positive feedback loop in model fitness (especially science/Bayesian updating).

Such bodies of knowledge evolve respecting a learnability-by-hillclimbing soft constraint, though, which implies they often form a neat latent space where similar codes are organized to belong to similar meanings/representations, as this can easily be learned by stochastic hillclimbing (repetition): each time the brain processes related information, it is nudged towards the latent space that is meant to be learned. Many parts of the world happen to be learnable in this way because everything is kinda smooth and continuous. Small causes tend to have small effects, as everything consists of a myriad of small particles that affect each other in smooth ways if you squint at them. Obviously not everything can be learned this way (implying large discontinuities), which is where brute memorization based on reward and punishment comes in handy.


> brain simply memorizes an awful lot which must work by a different mechanism besides backprop because backprop cannot do one-shot learning

If you look through a neuroscience textbook section on memory systems, it's commonly suggested that the hippocampus does the one shot learning and transfers that over time to the cortex. This is backed up by clinical case studies.

> The brain basically outsources the problem of tunneling through large discontinuities, to cultural/individual trial-and-error and episodic memory

That seems like a good strategy. It also reminds me of AlphaGo's Monte Carlo search + neural network training setup. Since the search is non differential, you do lots of simulations and apply a differentiable DL model to the results to approximate a possibly discontinuous landscape


> If you look through a neuroscience textbook section on memory systems, it's commonly suggested that the hippocampus does the one shot learning and transfers that over time to the cortex. This is backed up by clinical case studies.

HC's role in episodic memory and consolidation via dreams seems kinda plausible, though I would not put much weight on it. I think dreams are a way of training a GAN-like discrimination between reality and imagination:

http://gershmanlab.webfactional.com/pubs/GenerativeAdversari...

Repetition of any kind likely does improve the model, even if it's merely simulation/dreaming.

> AlphaGo's Monte Carlo search + neural network

I think, in effect, MCTS amounts to something like bagging/boosting/mixture of experts, as it computes a weighted average of the predictions when exploring different branches. But sure, the search mechanism implements a function which a recurrent neural network could probably not discover, as it hides behind substantial discontinuities in the fitness landscape (it's not a structure you can uncover step by step; you immediately need tree structure, a search recursion etc.). The RNN would likely need to conceptualize the search process linguistically (if subvocally), like humans do, which requires structure for the sequential composition of stable prototypes (symbols), which in turn likely requires a one-shot sequential memory. I think even the human mind does not literally do MCTS (that would require an overhead the brain is just not capable of), but some heuristic approximation thereof. The brain can simulate MCTS by linguistic means, though, even if it's just words of wisdom like "take counsel with your pillow", which literally means: explore the hypothesis space some more and let the temporal differences back up better value estimates.


Very interesting also is that you can directly send the error to intermediate layers through sparse random projections, without the need for any layerwise backpropagation. This relaxation of the structure of the backward pass makes backprop even more plausible from a biological perspective.

https://arxiv.org/abs/1609.01596

https://arxiv.org/abs/1903.02083
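A toy sketch of the direct-feedback idea (my own illustration, not the papers' code, and using dense rather than sparse projections for simplicity): the output error is projected straight to each hidden layer through its own fixed random matrix, with no layer-to-layer backward pass at all:

```python
import numpy as np

rng = np.random.default_rng(1)

# Three-layer net; the output error reaches BOTH hidden layers directly
# through fixed random matrices B1, B2 -- no layerwise backward pass.
n_in, n_h, n_out = 4, 16, 2
W1 = rng.normal(0.0, 0.5, (n_h, n_in))
W2 = rng.normal(0.0, 0.5, (n_h, n_h))
W3 = rng.normal(0.0, 0.5, (n_out, n_h))
B1 = rng.normal(0.0, 0.5, (n_h, n_out))  # error -> layer 1, fixed
B2 = rng.normal(0.0, 0.5, (n_h, n_out))  # error -> layer 2, fixed

X = rng.normal(size=(200, n_in))
T = X @ rng.normal(size=(n_in, n_out))   # random linear teacher task

def loss():
    h1 = np.tanh(X @ W1.T)
    h2 = np.tanh(h1 @ W2.T)
    return np.mean((h2 @ W3.T - T) ** 2)

loss_start = loss()
lr = 0.02
for _ in range(1000):
    h1 = np.tanh(X @ W1.T)
    h2 = np.tanh(h1 @ W2.T)
    e = h2 @ W3.T - T                  # output error
    d2 = (e @ B2.T) * (1.0 - h2**2)    # skips W3.T
    d1 = (e @ B1.T) * (1.0 - h1**2)    # skips layers 2 and 3 entirely
    W3 -= lr * (e.T @ h2) / len(X)
    W2 -= lr * (d2.T @ h1) / len(X)
    W1 -= lr * (d1.T @ X) / len(X)
loss_end = loss()
```

Note that the update for each hidden layer depends only on the output error and that layer's own activations, which is why this is considered more biologically plausible: no symmetric weight transport is needed anywhere.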


Fwiw, afaik, cameras are a relatively poor measure for preventing burglary. By far the most important factor is reinforced windows and doors.


What's the recovery rate of corals that have experienced heat waves? I'd imagine that species which can withstand the heat are going to fill the empty spots?


https://www.nationalgeographic.com/environment/2019/04/great...

As the climate and the Earth's ecosystems move into unprecedented states (including treating it like a tensor and including the rate of change) there is little reason for that degree of optimism.


ocean acidification is likely going to wipe them out regardless of temperature


It was an overreaction which, however, was only enabled by absolutely lazy journalism. They basically took Bouman's Facebook post with a photo of her smiling next to the black hole, claiming that she produced the picture, blowing her contributions way out of proportion. Some reports corrected this later that day, but by then the shitstorm and investigation had already started. Perhaps understandably -- she did not produce the picture. One could argue that Bouman's reaction was also way too delayed and that she did not do enough to clarify the situation, but this is perhaps understandable assuming she did not follow social media very closely.

The entire fiasco was mainly caused by the obsession of the media to put women at the forefront.


Most of these kinds of products are built to fail so that you'll pay for a new one soon.


Don't forget that the CO2 footprint of these things is immense. Though take growth away and things will become zero-sum; people will try to take things from each other rather than from growth, and will find petty reasons for conflict which will escalate sooner or later. Either way, we're likely f*ed.


The fact that she can lead a 20-minute interview ...

https://www.bitchute.com/video/WycgzQE8t4LS/

... rules out (2). Technology is not advanced enough for (3). So (1) is currently the most likely: https://www.reddit.com/r/13or30/

Though you've missed the less likely option (4): an extraordinarily gifted 13-year-old girl spending too much time on the internet. Some kids are amazingly smart.


Technology is already in use for #3: https://www.bbc.com/news/blogs-trending-49151042


Yeah, but the technology is not good enough for the level of authenticity here.


They also have a "Ministry of Transport and Digital Infrastructure" as if expertise in one domain would easily transfer to the other. The only commonality seems to be that things are being transported from A to B via some network, but it pretty much ends there.


Oh, that's not really an issue, he shows very similar levels of incompetence in both areas.


His predecessors from CDU/CSU also. Infrastructure ministry is dominated by a history of politicians hellbent on kissing the automotive industry's butt...


I second this


Strictly speaking, aren't they both just series of tubes? /s


Let's join also the ministry of health then! :)


I think the money comes from the same budget, and in both cases it's infrastructure. At the end of the day you don't need that much expertise to allocate funds; it's more about politics.


>you don't need that much expertise to allocate funds

You need expertise to allocate funds wisely however.


Digital infrastructure is mostly privatized (and monopolized).


it does kind of make sense if you consider that roads and rail lines are good places to put data cables, and that you have to dig up roads for the cables to be placed. so having the authority to do so within one ministry sounds like a good idea.


Fair point, but they do invoke this ludicrous metaphor of the data highway. Also, why isn't it then simply called "infrastructure"? There are surely other large-scale planning decisions that could be subsumed, e.g. pipelines, power lines etc.; why are they not mentioned?


Competence doesn't seem to be a requirement in these positions.


Ultimately they are both "realtime logistics", and the minister had better have expertise in long-term public projects than in either one. But yeah, sure, they'd better be separate.


The infamous "Datenautobahn".


Sounds like a pretty close translation of the "information superhighway".

