Hacker News | Zevis's comments

I'm with the other two replies. My recommendations match my interests quite well. Probably due to what videos I like and others that I hide because they're not relevant to me. I don't think I'd be using YT if it weren't for the video suggestions.


Might just be you're doing too much exercise. If I over-reach too far, I end up not being able to sleep all night.


Personal anecdote: It seems that some people produce metabolites that disrupt their sleep when they exercise, even at low levels of exercise.


I get something like this even from relatively light exercise (~40 minutes on a Pilates reformer). If I do it after 6pm my body feels very hot and I have trouble sleeping even 5-6 hours later.


There is the concept of being "hypermetabolic", where it is hard to sleep after sustained exercise essentially because your body is so fired up. I have experienced this, but it really has to be an unusually long run, at least 30 km, usually more. It would not surprise me if you're correct and the threshold for this state varies by person (and almost certainly by fitness level), so for some it could occur at low levels like you say.


Looking inside of myself just makes me more restless. It's about the worst thing I can do if I actually want to sleep. Exercise has been a great help in shutting off my mind at night.


We don't need perfect immunity. Canada has shown that focusing on first doses over "full" immunity can work quite well at a population level.

The flu vaccine is around 10-40% effective. But suddenly we pretend that if a vaccine doesn't give us 98% immunity, it's useless? Seriously?


As a Canadian, I’d caution against firm conclusions from our experience. I’m inclined to think first doses provide good protection, but cases collapsed last summer too despite a relaxing of NPIs. It is hard to overstate how much Canadians are outside in summer vs. other seasons:

* School is out, university is out

* Air conditioning is comparatively rare. Or if present, unnecessary. People open their windows and doors instead. I’m typing this now with an open door with bug screen in front.

* Drivers roll down their windows

* People socialize outside in parks and backyards

* The population tans all of a sudden, raising vitamin D levels. Normal respiratory virus spread collapses

* Patios flourish at bars and restaurants. Often the inside is fairly empty. Window-walls open completely leaving restaurant/bar interiors open to the fresh air

* Many offices and professions have summer holidays

Then by mid September we hermetically seal ourselves inside and everyone gets sick once school is back.

I’m cautiously optimistic we’re past needing lockdowns but it is still too soon to tell. Our summer had a powerful seasonal effect last year and seems to have this year too.

(Past needing lockdowns = we’ll get enough new first and second doses in over summer while the season is on our side)


I agree with what you're saying, and I noticed that trend earlier when people were talking about 70% vs 99% effective vaccines...

But IMO, the flu vaccine is pretty useless. I've never had any confidence that it's actually doing anything for me or anyone I know. I notice no difference between years I get it and years I don't. And if the Covid vaccines were only providing 10-40% protection, this whole situation would be a lot worse.


I feel like we're already seeing the effects of chemicals like lead now. How many older people are suddenly radical Trump or QAnon supporters?


Working in PC repair, I see that a majority of my customers can't just up and buy a new PC because Microsoft arbitrarily decides that whatever they have isn't good enough, even though it has sufficient performance.


And they don't have to. They can keep using the PC they have until 2025. After that, there's always Linux Mint.


I seriously hope the EU becomes aware of this and forces MS to change this crap. It's actually insane from a sustainability standpoint.


It would be unreasonable to force MS to list old hardware as compatible in any case, especially given that 10 is officially supported until 2025.


I had no idea they'd gone back to 4 numbers for some of their CPUs. What does G7 even mean?



Yes, but that's the normal pattern: the historical naming was XYYY, where X was the generation and YYY was the SKU. With the 10th generation, this became XXYYY, continuing the pattern.


Graphics Level 7. Basically a relative indication of the iGPU capabilities.


It's still unclear to me why anticompetitive behavior is so hard to understand for people here. Free markets don't exist. Stop pretending they do. Apple has a crazy amount of power over companies selling apps on their platform. Power that would easily be considered a monopoly and anticompetitive if brought to court in, say, Germany. The fact that Apple doesn't own the smartphone market completely is irrelevant. What matters is their potential and real power to unilaterally decide on contract/business matters affecting any number of companies that use their platform.


I'm confused. I easily get 8-10 hours on my Core i7 laptop as long as I'm not at 100% CPU usage permanently. Modern Ryzens also hit this easily. So I don't know why 8 hours screen on time on a tiny screen with an incomparably weak CPU is supposed to be proof of how much we need ARM.

Apple's M1 is in a class of its own, unfortunately.


I’m also frequently confused when people refer to Windows laptops as in a different league of battery life. Modern Windows laptops do just fine. M1 is a cycle ahead, but the Ryzen mobile parts are actually very good too.


Lots of people buy 15-inchers with dedicated graphics etc. Or like... run Linux (which hoses the battery on a lot of hardware if you just use it out of the box without tweaking).

Also really cheap laptops aren't as great about battery life just cuz of reasons. I think there's a lot of selection bias (like people who have a lot of money will buy macbooks in general cuz that's what everyone says to do)


I've used various distros on my recentish Dell laptop. I get the advertised 8-10hrs without configuring anything.

It just werks.


I configured my Thinkpad X280 under Arch to run at roughly 3 Joules/second. Which gives me way over 12 hours of active use time.

Linux is far superior to windows in this regard.
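The arithmetic behind an estimate like that is just capacity divided by draw. A minimal Python sketch, assuming the X280's stock 48 Wh battery (my assumption, not stated above):

```python
def runtime_hours(battery_wh: float, avg_draw_w: float) -> float:
    """Estimated screen-on runtime: energy (Wh) over average power (W)."""
    return battery_wh / avg_draw_w

# Assumed 48 Wh pack at the ~3 W (joules/second) draw mentioned above:
print(runtime_hours(48.0, 3.0))  # 16.0 hours, consistent with "way over 12"
```

On Linux you can read the instantaneous draw yourself from /sys/class/power_supply/BAT0/power_now (reported in microwatts) and plug it into a calculation like this.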


Unless, unfortunately, you've got switchable nVidia graphics. As of last year, at least, without some real gross stuff you're looking either at awful multi-monitor performance on AC power (because the dGPU is permanently switched off) or awful battery life (because you're using the nVidia chipset and it eats batteries for breakfast, lunch, and dinner).


Nvidia's Linux support story has been crummy for the last half decade, if you bought a laptop with an Nvidia dGPU and are expecting it to work efficiently in Linux then you need an expectation reset.


Yeah, for graphics it sucks. OTOH, for CUDA it's literally just plug-and-play (at least on Ubuntu 20.04).

Makes a nice change from five years ago, when I kept breaking my display in order to make CUDA work.



> Joules/second

aka Watts


To be fair the laptop is rated for 16.5 hrs.


Yes, in some weird lab setting with 12% screen brightness and no user activity.


A friend got an X250 with no tuning to work continuously for something like 26 hours.

But ThinkPads back then hadn't yet shrunk their batteries to get slimmer.


how's the keyboard compared to 2015 macbooks?


It's very much OK. It's different but it's still good. I prefer the MBP (provided 2015 is still the old non-butterfly mechanism) but I would have no problem with switching 100% to the TP keyboard.


Why isn't Linux configured to better preserve battery life on laptops by default? Power usage has been an issue for over a decade. If it's just a matter of a few tweaks, surely it could be done?


It's not a matter of just a few tweaks. It's a few new tweaks each generation, as Microsoft and Intel keep changing their minds on what the "right" way is to put devices into low-power modes, and then don't thoroughly document those changes, let alone upstream them for inclusion in standards documents.

And then there are the hardware bugs which require workarounds in firmware or drivers, and the firmware bugs which require workarounds in drivers, all of which are only developed and tested against Windows.

The set of power management options that Windows 10 exposes to the user has been steadily dwindling to the point that on a new laptop you basically only get to customize how long before the screen shuts off and how long before it goes to sleep. All the more detailed options you had in early Windows 10 or in Windows 7 are no longer exposed, because there's no consistent way to map such controls onto the ever-shifting set of underlying platform features.


Distros don't even automatically install and configure TLP when they detect an internal battery. So yeah, it is 'Linux' that is the problem on laptops.

That and automatically activating a 'small speaker' EQ for when a device is detected to be portable / be a laptop and have internal speakers would be a massive user experience improvement for laptop Linux users.
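For anyone who does install TLP by hand, a few of the battery-side knobs live in /etc/tlp.conf. An illustrative fragment — these are real TLP option names, but the values are just one plausible choice, and not every option is safe on every machine:

```shell
# /etc/tlp.conf — illustrative fragment, not a universal default
CPU_SCALING_GOVERNOR_ON_BAT=powersave   # prefer low clocks on battery
CPU_ENERGY_PERF_POLICY_ON_BAT=power     # bias the energy/perf hint toward efficiency
WIFI_PWR_ON_BAT=on                      # enable wifi power saving
SOUND_POWER_SAVE_ON_BAT=1               # let the audio codec sleep when idle
USB_AUTOSUSPEND=1                       # autosuspend idle USB devices
```

After editing, `sudo tlp start` applies the new settings.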


> Distros don't even automatically install and configure TLP when they detect an internal battery.

TLP isn't magic. More than half of its documented options are inapplicable to current hardware or kernels, and a fair number of the remaining options that could still have an effect are not safe for distros to ship as defaults, usually because they'll trigger firmware or hardware bugs. Installing TLP by default would be far from an actual solution, and isn't even that great a first step toward solving platform power management inadequacies.


A few tweaks for each different model of laptop. Consumes a lot of volunteer effort very quickly, that.


And some of the tweaks can cause instability or data loss on different models with different quirks.


Mostly speculation, but I think a big part of it is that MacOS will heavily throttle as it needs to, and I imagine that Windows laptop drivers get a lot of love in that space too since the manufacturers want to get good battery life in their machines as well.

I'm sure a huge component of it is just people-hours spent in figuring out the right balance of defaults that don't just mess people's setups up.


It's up to each distribution to provide a kernel build + configuration that suits some specific use-case.


Pop OS has its own power management that is pretty decent. I can almost get through a whole day of work without plugging in.


I suspect it has to do with getting decked-out machines. If you've got the >1 TB SSD and the >=32 GB RAM model, your OS will simply use the resources for caching and speeding up the machine, which in turn will use more battery.

At least that's my pet theory, running a 4750U with 64 GB of RAM installed ;)


The RAM probably uses more power just by having more installed, but it shouldn't be causing more expensive work. If anything, being able to use RAM cached content should allow the SSD and its IO channels to idle more often. A "decked out" laptop might also have a more powerful GPU with nowhere near as good idle power characteristics.

But, the largest differences are probably in idle power management, with the phone having much lower power IO channels and better zoned power management to really reduce idle power. A laptop often has the system in a much higher power level with off-chip resources powered up, including WiFi, screen refresh, data buses, and backlight driving a much larger screen area.

Edit to add: I've noticed on laptops with Linux that vastly different battery life from "similar" machines come from differences in how the software and firmware interact to reach different powersaving modes for screen on but idle states, such as with a browser open on a page that has already rendered and awaits user input.


That’s not quite accurate. When you’re talking about standby time in suspend-to-RAM, RAM refresh is actually a non-trivial battery cost because it’s basically your only remaining active power draw. Which DIMMs have active memory pages is a partial function of memory usage.

So if your kernel woke up (let’s say after 30 min of suspend-to-RAM) to try to see if it can reclaim physical DIMMs and mark them unused (shrink caches, reorganize pages, etc.), you could get quite a big win on standby time. The trick is how to do this without actually hurting battery life (waking up can be expensive and there’s no guarantee you’ll be able to free up DIMMs) and while preserving wake-from-suspend performance (if your phone is laggy on wake, that’s an experience users will switch away from, plus you could eat your entire power savings paging back in).
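To put rough numbers on why reclaiming DIMMs could matter, here is a back-of-envelope sketch in Python. All figures are assumptions for illustration (per-DIMM self-refresh draw and platform overhead vary a lot by generation), not measurements:

```python
def standby_days(battery_wh: float, draw_w: float) -> float:
    """Days of suspend-to-RAM standby at a constant power draw."""
    return battery_wh / draw_w / 24

# Assumed: ~0.5 W per DIMM kept in self-refresh, ~0.3 W of platform
# overhead, 50 Wh battery.
overhead_w = 0.3
two_dimms = standby_days(50, 2 * 0.5 + overhead_w)  # ~1.6 days
one_dimm = standby_days(50, 1 * 0.5 + overhead_w)   # ~2.6 days
print(round(two_dimms, 1), round(one_dimm, 1))
```

Under those assumed numbers, powering down one of two DIMMs stretches standby by roughly 60%, which is the kind of win the wake-up-and-reclaim trade-off would have to beat.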


Is the memory controller aware of unused pages in physical RAM that thus can be safely skipped during refresh? That would surprise me. In stand-by the self-refresh mode is used, and I believe partial refresh is only supported by the low-power (LPDDRx) standards and is very coarse-grained.


No. What I heard proposed was just at the DIMM level. The memory controller has no concept of memory usage.


The refresh costs you are talking about are what I meant by the RAM has cost by being installed on a typical laptop. The power is used because the machine is in an active-idle power state with that much RAM installed, not because the OS has decided to burn more CPU time in the presence of more RAM as someone suggested earlier in the chain. Most of the idle power consumption is all the ancillary system controllers, not the CPU cores themselves. These are the things that are better managed on a typical smartphone SoC platform.

And, I am referring to active idle, not suspend-to-ram scenarios. I think people are talking about screen-on time for battery life, not lid closed and suspended nor lid open but screen disabled. Precious few people are enjoying a laptop that successfully performs a suspend to RAM sleep state while keeping the display alive. Often, the integrated GPU is continuously scanning framebuffers in system RAM and outputting pixel data to the embedded display port even though nothing is changing on screen. Frequently, drivers have disabled LCD panel self refresh modes due to unresolved glitches and artifacts in the graphics stack.


Yup. One of the major reasons cell phones outperform laptops on battery life is that they go to suspend-to-RAM aggressively. I think a similarly aggressive mode would be needed for laptops on battery power to compete effectively (and it might require OS optimizations to make the wake scenario as instant as it is on mobile).


It's a reality distortion field, and maybe an effect of not blocking OS X's default "go to sleep the moment you stop looking at it" policy. Macs were stuck at a (marketing) 10h battery lifetime for so long (for comparison, Xeon mobile workstations with a few times the power regularly hit 10h on battery without tuning) that the M1 feels like a huge jump out of nowhere.


Yeah. Consumer CPU power draw is largely a solved problem. Total aggregate processor energy use on a typical Intel laptop is already down to somewhere around 20-30% of what the display backlight pulls. That's good enough, you're just not going to do much better on a laptop form factor.

The M1 really isn't that notable for "battery life". All the people raving are hitting particular edge cases of high CPU utilization that consumers (even developers) generally don't see when browsing and watching. The Apple power magic is all happening in phones.

And the magic of the M1 is that they have achieved desktop-class (nearly market-leading) performance in a chip that still draws like a phone at idle. It's an amazing piece of engineering, but in a laptop it's really just an incremental improvement over what we already have.


> Apple's M1 is in a class of its own, unfortunately.

We want companies to excel and give a stiff competition to their peers. Now that we have M1, it's pushing the entire floor to the next level - pushing Intel, AMD and the entire x86 ecosystem.

Curious, why do you think it's unfortunate? Is it because Apple is a large company?


Not the parent, but: it isn't unfortunate that the M1 is so good; it's unfortunate that everything else is so far behind.


Is it that far behind? Benchmarks aren't everything, but they're probably as good as any other metric. A14 is ahead of SD888 in Geekbench (which has historically favored Apple CPUs), and behind on Antutu (which is more considerate to Snapdragons).

Geekbench 5:

- Single core: A14 1609, SD888 1127

- Multi core: A14 3872, SD888 3731

Antutu:

- A14 615796, SD888 723674
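For a sense of scale, the relative gaps in those quoted scores can be computed directly:

```python
def pct_lead(a: float, b: float) -> float:
    """Percent by which score a leads score b."""
    return (a / b - 1) * 100

# Scores quoted above:
print(round(pct_lead(1609, 1127), 1))      # Geekbench single-core: A14 ~42.8% ahead
print(round(pct_lead(3872, 3731), 1))      # Geekbench multi-core: A14 ~3.8% ahead
print(round(pct_lead(723674, 615796), 1))  # Antutu: SD888 ~17.5% ahead
```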


Benchmarks aren’t everything, as you said. I had the last-gen Intel MacBook Pro for 3 years and replaced it with the M1 MacBook Pro. It’s almost an apples-to-apples comparison, as they are the same spec except for the new processor. The difference is night and day. It simply feels faster to use. It lasts almost 1.5 workdays of my use, while the Intel one would not even last one full work day. I don’t need to lug around a power brick anymore. I charge at home and bring only the Mac to work without feeling anxious about battery. The cost I pay is that building x86 Docker images is quite slow on the M1, but I don’t do it often enough for it to bother me much.


> and behind on Antutu (which is more considerate to Snapdragons).

It is more considerate of multi-core workloads.

The main point is single-core benchmarks, where the A14 sets itself apart from all of its competitors.


Note that M1 is faster than A14:

Geekbench 5:

- Single core: 1744

- Multi core: 7676


The quoted comment referred only to battery life.


If Samsung just commits to outspending Apple on node commitments, transistor counts, and die size, then the outcome of the race may not be so predetermined yet.

Transistor for transistor, ARM's latest licensable cores are not so far behind.


Last time I checked Samsung tablets and phones, they were already laggy in the store. I don't know whether it's all the crapware that gets pre-installed, or just inefficient programming. But I think CPU speed might not be what's holding them back.


Their Exynos are infamous for being slower, running hotter and using more power than equivalent Snapdragon and even Kirin (and of course, Apple SoCs).

But I doubt that shows in light usage like scrolling or opening an app, that's more likely their (lack of) software optimization.


My experience is that Exynos hw was great... but Samsung software (including drivers) was problematic.

Combine that with the legacy of Android's design going for flexibility, including allowance for inefficient options (especially important when comparing graphics: several Apple models ran on the edge of being able to paint one frame without stutter assuming well-optimized code, and Apple spent a lot of time ensuring you didn't see that), and you get a certain reputation.

The only thing that I noticed really problematic is graphics intensive software optimized for Qualcomm.

Also, the real issue is not that Samsung doesn't have the capability. A lot of Apple's "secret sauce" is that they don't have contractually separated design teams that have to "shop" around for suppliers/buyers. Both Qualcomm and Samsung are forced to make more mediocre CPUs because that brings a wider selection of buyers, whereas Apple can design the device and SoC together, which lets them easily take decisions like "OK, let's put A LOT MORE L1/L2 cache on each core" because they aren't going to deal with customers not wanting to buy them.


But Samsung has its own huge mobile business. Wouldn’t it be justified to make an OP SoC for their Galaxy Tab (or whatever they sell)? I’m sure their execs are aware of the possibility.


Supposedly they are forced to keep a "Chinese wall" strategy due to possible legal concerns. How true that is is hard to check, but there seems to be considerable variation, and definitely less "design this CPU specifically for this phone" in the Exynos lineup, except maybe for the first Samsung Galaxy S and the SGS2.

P.S. Exynos and Apple A-series have common ancestry


Outspending Apple seems like a tall order, to put it mildly.


Probably not when you're the second largest.


You would need to not only outspend them, but at least match their engineering. Either one alone would be a tall order. But they should at least try, consumers will benefit even from a close miss.


Not OP, but Apple doesn't sell their chips to other companies. It's their right to do so, but if Apple sold M1, that'd be great for consumers.


M1 is that good because Apple customises their software for it (or the other way around). I doubt any other company would dedicate that much effort to it.


M1 is quite good at general-purpose workloads.


Does the processor really dominate power consumption in modern laptops so much that the M1 chip largely explains the long battery life in new Macbooks? The data doesn't seem easy to come by with a cursory web search.


I was considering Zephyrus G14 at some point, so I visited the dedicated subreddit. There was a guide on getting it to ~8 hrs on battery. Man, I lost my enthusiasm right away. Custom fan management, getting rid of bloatware...

