If not, you should seriously consider switching banks (while you can). I suspect that such banks do not take security seriously: Giving control over your phone to Apple/Google is not security.
This is interesting. Foxtails are pretty common where I live, so common that one species of Foxtail is named after the city (Bromus madritensis) (Madrid, Spain). Not once has it affected any of my dogs, nor have I ever heard of it being a problem at all. I wonder if it's not all species of Foxtail that cause this.
This is for when you receive JPEG-encoded DICOMs. You transcode them to JPEG XL (saving that 20% of storage) and then, if a modality/viewer/whatever that needs JPEG requests them, they're transcoded back to JPEG on the fly, losslessly.
Losslessly meaning: with the same quality as the original JPEG received by the storage.
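To sketch that round trip (illustrative only; this assumes the reference libjxl CLI tools cjxl/djxl are installed, and the wrapper names are mine):

    import subprocess

    def jpeg_to_jxl(jpeg_path, jxl_path):
        # cjxl keeps the data needed to rebuild the original JPEG
        # exactly while shrinking the file (roughly that 20%).
        subprocess.run(["cjxl", "--lossless_jpeg=1", jpeg_path, jxl_path],
                       check=True)

    def jxl_to_jpeg(jxl_path, jpeg_path):
        # djxl reconstructs the original JPEG from the stored data,
        # so the requesting modality/viewer gets back what was archived.
        subprocess.run(["djxl", jxl_path, jpeg_path], check=True)

The "on the fly" part is then just running the second step in the request path instead of ahead of time.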
Max Richter, John Cage, Tangerine Dream, Klaus Schulze, Gavin Bryars, Richard Chartier, Asmus Tietchens, Tomaga, Boards of Canada, Stars of the Lid, William Basinski, Joanna Brouk, Pauline Oliveros ...
Drone Zone on SomaFM (free internet radio) was how I discovered a lot of that stuff. Although they don't play the old classics as much these days, it's still good, and they have a few similar stations there: https://somafm.com/player24/station/dronezone
I generally find Deep Space One more appropriate for most of my coding, though I used Drone Zone a lot many years ago.
I've been supporting SomaFM for more than 20 years now, and am so grateful for it. Not just the ambient stuff, but Secret Agent and several others too.
I guess I agree (I used to be a massive 90s EBM collector together with my ex, though I kind of fell out of the EBM loop around the end of the 00s / start of the 10s). Nice to see a Woob album from 1994 recommended a few comments below. <3 for CBL; I do like the track ~42 degrees.
How we used to find music: go to the record store every week to listen to whatever you couldn't afford; look on P2P networks for people who like similar music to you, and browse their collections; eventually, use Discogs to search. Or simply talk with other people (at parties, on the internet) who also like the same music.
How we can find music nowadays: Spotify (and such). I mean, seriously. Their suggestions can open you up to a plethora of new artists. If you then look at an artist's top 10, chances are you'll like some of their work. I found a lot of music this way, across all kinds of genres. As Valve's Gabe used to say: piracy is a service problem. Though I am not sure Spotify is so good for the artists, given they earn pennies from it.
...and it is still nowhere near getting and downloading and listening 24/7 to every new release (or, well... trying to), using SMB to the NAS (which automatically gets the releases from a scene FTP) and Winamp locally to add some .m3u files.
I recommend Stair (2:22:22) by datassette for focus and ambient background. The artist recorded the sound of downtown Chicago overnight from his hotel and then processed and mixed this together with processed sounds from MS-DOS strategy game soundtracks from the 80s. Brilliant.
Not OP, but I also often listen to ambient while programming. A couple of recommendations would be "Music for Nine Post Cards" and other works by Hiroshi Yoshimura, and "Music for 18 Musicians" and others by Steve Reich.
In fact, the use of loops described in this article reminded me of what Reich called "phasing": basically the same concept of melodic patterns emerging and shifting between different samples.
I'll second Max Richter's Sleep. Timeline by Edith Progue might interest you too. The latter was my favorite "calm down" CD even before Max Richter's, for when I'm too restless to sleep.
And maybe Glitch (music) might be of interest as a starting point, especially the "Clicks & Cuts Series" which gave me a lot of pointers to interesting niche artists.
Biosphere - Shenzhou and Cirque, and Stars of the Lid - The Tired Sounds of Stars of the Lid are favorites of mine. I would also include everything by Microstoria, which is not ambient but works to the same end.
A good place for experimental music is UbuWeb; in fact, Brian Eno is also on there[1].
Edit:
Also, if you're a programmer and want to learn a new programming language, check out SuperCollider[2]. You can use it to create your own ambient sounds. SC has a great library for creating user interfaces along with creating sound.
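For a taste, a minimal drone sketch in SC (just an illustration of the idiom, nothing sophisticated):

    (
    {
        // two slightly detuned sine pairs beating against each other,
        // washed in reverb: a classic ambient starting point
        var sig = SinOsc.ar([110, 110.5], 0, 0.1)
            + SinOsc.ar([220, 220.7], 0, 0.05);
        FreeVerb.ar(sig, mix: 0.6, room: 0.9)
    }.play;
    )

Evaluate that in the SC IDE and you get a slowly beating drone; everything past that is layering and patience.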
For a good intro, the Sleepbot Environmental Broadcast radio is well worth listening to. Also, their write-up on how and why they produce the broadcast is really interesting.
A lot of great recs in this thread, but I'll add a couple others I didn't see listed yet:
Mort Garson: Mother Earth's Plantasia
Hiroshi Yoshimura: Surround
Satoshi Ashikawa: Still Way (Wave Notation 2)
Shameless plug... Search BirdyMusic.com in Spotify/Apple Music/YouTube Music to hear some ambient music algorithmically generated from real-time BirdNET detections and weather in my backyard.
I have a 5hr playlist on Spotify called "lost in the sea of ambien" which happens to have many of the artist recs here. The title is a reference to Haruomi Hosono, who said he got lost in the sea of ambient in the 80s after leaving YMO.
Yeah, everything's interconnected, as Tangerine Dream got to work on the GTA V soundtrack. There is this note about that track on Wikipedia:
The track "5:23" is included in the 2008 video game Grand Theft Auto IV and appears on the soundtrack album The Music of Grand Theft Auto IV. In the digital release it is listed as "Maiden Voyage". This track is very similar to, but does not credit, the song "Love on a Real Train" by Tangerine Dream from the Risky Business soundtrack. They had remixed the song for a then upcoming Tangerine Dream remix album but had their effort rejected so released it as 5'23 instead.
Probably a lot of people here disagree with this feeling. But my take is that if setting up all the AI infrastructure and onboarding it to my code is going to take this amount of effort, then I might as well code the damn thing myself, which is what I'm getting paid to do (and enjoy doing anyway).
Whether it's setting up AI infrastructure or configuring Emacs/vim/VSCode, the important distinction to make is if the cost has to be paid continually, or if it's a one time/intermittent cost. If I had to configure my shell/git aliases every time I booted my computer, I wouldn't use them, but seeing as how they're saved in config files, they're pretty heavily customized by this point.
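For example, a couple of git aliases of the kind that live in ~/.gitconfig (these particular ones are just illustrations):

    [alias]
        st = status -sb                        # short status with branch info
        lg = log --oneline --graph --decorate  # compact history view
        amend = commit --amend --no-edit       # fold changes into the last commit

Pay the cost of writing them once, and they're there every time you boot.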
Don't use AI if you don't want to, but "it takes too much effort to set up" is an excuse printf debuggers use to avoid setting up a debugger. Which is a whole other debate though.
I fully agree with this POV but for one detail: there is a problem with sunsetting frontier models. As we begin to adopt these tools and build workflows with them, they become pieces of our toolkit. We depend on them. We take them for granted, even. And then the model changes (new checkpoints, maybe alignment gets fiddled with) and all of a sudden prompts no longer yield the results we expected from them after working on them for quite some time. I think the term for this is "prompt instability".
I felt this with Gemini 3 (and some people had a less pronounced but similar experience with Sonnet releases after 3.7): for certain tasks that 2.5 Pro excelled at, it's just unusable now. I was already a local model advocate before this, but now I'm a local model zealot; I've stopped using Gemini 3 over this. Last night I used Qwen3 VL on my 4090, and although it was not perfect (sycophancy, overuse of certain cliches... nothing I can't get rid of later with some custom promptsets and a few hours in Heretic), it did a decent enough job of helping me work through my blind spots in the UI/UX for a project that I got what I needed.
If we have to re-tune our prompts ("skills", agents.md/claude.md, all of the stuff a coding assistant packs context with) with every model release, then I see new model releases becoming a liability more than a boon.
If you find it works for you, then that’s great! This post is mostly from our learnings from getting it to solve hard problems in complex brownfield codebases where auto generation is almost never sufficient.
It's a couple of hours right now, then another couple of hours "correcting" the AI when it still goes wrong, another couple of hours tweaking the file again, another couple of hours to update when the model changes, another couple of hours when someone writes a new blog post with another method etc.
There's a huge difference between investing time into a deterministic tool like a text editor or programming language and a moving target like "AI".
The difference between programming in Notepad in a language you don't know and using "AI" will be huge. But the difference between being fluent in a language and having a powerful editor/IDE? Minimal at best. I actually think productivity is worse because it tricks you into wasting time via the "just one more roll" (ie. gambling) mentality. Not to mention you're not building that fluency or toolkit for yourself, making you barely more valuable than the "AI" itself.
You say that as if tech hasn't always been a moving target anyway. The skills I spent months building in a specific language and IDE became obsolete with the next job and the next paradigm shift. That's been one of the few consistent themes throughout my career: hours here and there, spread across months and years, just learning whatever was new. Sometimes, like with Linux, it really paid off. Other times, like PHP, it did, and then fizzled out.
--
The other thing is, this need for determinism bewilders me. I mean, I get where it comes from: we want nice, predictable, reliable machines. But how deterministic does it need to be? If today it decides to generate code where the variable is called fileName, and tomorrow it's filePath, what do I care that it's not totally deterministic and the variable names it generates differ? As long as the code is consistent with the existing codebase and passes tests, what's the importance of it being deterministic to a computer-science level of rigor?

It reminds me of the travelling salesman problem, or the knapsack problem. Both are NP-hard, but users don't care about that. They just want the computer to tell them something good enough for them to go on about their day. So if a customer comes up to you and offers a pile of money to solve either one of those problems, do I laugh in their face, knowing damn well I won't be the one to prove that NP = P, or do I explain the situation to them and build them software that will do the best it can, with however much compute they're willing to pay for?
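To make "good enough" concrete, here is the kind of answer I mean for the knapsack case: a greedy value-density pass (a sketch; the item numbers are made up):

    def greedy_knapsack(items, capacity):
        # items: (value, weight) pairs; grab by value density until full.
        # Not optimal (knapsack is NP-hard), but it answers today.
        chosen, total = [], 0
        for value, weight in sorted(items, key=lambda i: i[0] / i[1], reverse=True):
            if total + weight <= capacity:
                chosen.append((value, weight))
                total += weight
        return chosen

    print(greedy_knapsack([(60, 10), (100, 20), (120, 30)], 50))
    # -> [(60, 10), (100, 20)]: total value 160, while the true optimum is 220.
    # The customer often doesn't care, which is exactly the point.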
If you have a counter-study (for experienced devs, not juniors), I'd be curious to see it. My experience has also been that using AI as part of your main way to produce code is not faster when you factor in everything.
Curious why there hasn't been a rebuttal study to that one yet (or if there is, I haven't seen it come up). There must be near-infinite funding available to debunk that study, right?
I've heard this mentioned a few times. Here is a summarized version of the abstract:
> ... We conduct a randomized controlled trial (RCT) ... AI tools ... affect the productivity of experienced open-source developers. 16 developers with moderate AI experience complete 246 tasks in mature projects on which they have an average of 5 years of prior experience. Each task is randomly assigned to allow or disallow usage of early-2025 AI tools. ... developers primarily use Cursor Pro ... and Claude 3.5/3.7 Sonnet. Before starting tasks, developers forecast that allowing AI will reduce completion time by 24%. After completing the study, developers estimate that allowing AI reduced completion time by 20%. Surprisingly, we find that allowing AI actually increases completion time by 19%—AI tooling slowed developers down. This slowdown also contradicts predictions from experts in economics (39% shorter) and ML (38% shorter). To understand this result, we collect and evaluate evidence for 21 properties of our setting that a priori could contribute to the observed slowdown effect—for example, the size and quality standards of projects, or prior developer experience with AI tooling. Although the influence of experimental artifacts cannot be entirely ruled out, the robustness of the slowdown effect across our analyses suggests it is unlikely to primarily be a function of our experimental design.
So what we can gather:
1. 16 people were randomly given tasks to do
2. They knew the codebase they worked on pretty well
3. They said AI would help them work 24% faster (before starting tasks)
4. They said AI made them ~20% faster (after completion of tasks)
5. ML experts predicted programmers would be ~38% faster
6. Economists predicted ~39% faster
7. Measured result: people were actually 19% slower
This seems to be done on Cursor, with big models, on codebases people know. There are definitely problems with industry-wide statements like this, but I feel like the biggest area where AI tools help me is when I'm working on something I know nothing about. For example, I am really bad at web development, so CSS/HTML is easier to edit through prompts. I don't have trouble believing that I would be slower trying to make an edit that I already know how to make.
Maybe they would have seen speedups by allowing the engineers to choose when to use AI assistance and when not to.
Minutes, really. Despite what the article says, you can get 90% of the way there by telling Claude how you want the project documentation structured and just letting it do it. It's up to you if you really want to tune the last 10% manually; I don't. I have been using basically the same system, and when I tell Claude to update docs it doesn't revert to one big Claude.md; it maintains it in a structure like this.
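Something along these lines, to be concrete (an illustrative layout; the file names are just examples, not my exact setup):

    Claude.md            # short entry point that links out to the rest
    docs/
      architecture.md    # how the pieces fit together
      conventions.md     # style and naming rules
      workflows.md       # build, test, and release steps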
Nah, works without issue. None of the complaints mentioned in this thread are true.
There are some issues wrt corp spyware like Intune device management, but the kinks are being worked through and figured out (tl;dr: required corp apps must be manually installed when activating the profile).