As I said, the initial piracy was an issue. That is what they settled over. Your link covers this:
> A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn’t illegal but that Anthropic wrongfully acquired millions of books through pirate websites.
It also covers how they later did it legally, and that was fine, but it did not excuse the earlier piracy:
> But documents disclosed in court showed Anthropic employees’ internal concerns about the legality of their use of pirate sites. The company later shifted its approach and hired Tom Turvey, the former Google executive in charge of Google Books, a searchable library of digitized books that successfully weathered years of copyright battles.
> With his help, Anthropic began buying books in bulk, tearing off the bindings and scanning each page before feeding the digitized versions into its AI model, according to court documents. That was legal but didn’t undo the earlier piracy, according to the judge.
I understand your thoughts. I've had similar motivation problems with blogging since the release of ChatGPT. It feels like you are writing for a machine rather than for readers. I've definitely seen a decline since December 2023 in readers of older articles that previously had steady traffic for years.
Also, I just purchased LazyVim For Ambitious Developers. I've used the online edition a number of times in recent months. Thanks for your work!
Excellent deep dive and explanation of the process of tracking down and fixing it. Thanks for sharing; it was a fun read. Will definitely keep this in mind next time I fire up Far Cry for some nostalgia!
Nanite is a lot more than just a continuous LOD system; the challenges they needed to solve were above and beyond that. Continuous LOD systems have been used for literal decades in things like terrain. The challenges for continuous LOD on general static meshes are around silhouette preservation, UV preservation and so on.

One of Nanite's insights was that a lot of the issues with automatic mesh decimation (major mesh deformation, poor results) just disappear when you are dealing with triangles that are only a few pixels in size, down to single-pixel triangles. But small triangles run into a problem called quad overdraw: graphics cards rasterize triangles in blocks of 2x2 pixels, so you end up shading pixels many times over, which is very wasteful. So the solutions they came up with in particular were:
- switch to software rasterization for small triangles. This required a good heuristic for choosing between the hardware and software rasterization paths. It also needed newer shader stages that sit earlier in the geometry pipeline, hardware features that arrived with shader models 5 and 6.
- using deferred materials, which drastically improves their ability to do batched rendering.
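To make the quad-overdraw point concrete, here's a rough Python sketch. The 2x2-quad cost model is how hardware rasterizers generally work, but the specific size threshold and the heuristic itself are my own illustrative guesses, not Epic's actual logic:

```python
import math

def quads_touched(w: float, h: float) -> int:
    """Pixel quads (2x2 blocks) a triangle's screen bounding box can touch.
    Hardware shades whole quads, so a tiny triangle pays for ~4 pixels
    even when it covers only one."""
    return max(1, math.ceil(w / 2)) * max(1, math.ceil(h / 2))

def pick_raster_path(bbox_w: float, bbox_h: float, threshold: float = 32.0) -> str:
    """Illustrative heuristic (threshold is made up): route small triangles
    to a compute-based software rasterizer, large ones to hardware."""
    if bbox_w < threshold and bbox_h < threshold:
        return "software"
    return "hardware"

# A 1x1-pixel triangle still lights up a full 2x2 quad on hardware,
# so 4 pixels get shaded for roughly half a pixel of actual coverage.
assert quads_touched(1, 1) * 4 == 4
assert pick_raster_path(1, 1) == "software"
assert pick_raster_path(300, 200) == "hardware"
```

The actual heuristic in a real renderer would also weigh things like clipping and depth-test behavior; this only captures the size-based intuition.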
It's actually the result of decades of hardware, software and research advancements.
The two solutions posted in recent days seem heavily focused on just the continuous LOD, without the rest of the Nanite system as a whole.
Also, yes, there were challenges around the sheer amount of memory needed for such dense meshes and their patches. The latest NVMe streaming tech makes that a little easier, along with quantizing the vertices, which can dramatically lower memory usage at the expense of some vertex position precision.
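As a rough illustration of the quantization idea (the bit width and per-cluster AABB scheme here are a generic example, not Nanite's actual encoding): positions can be stored as integers relative to a cluster's bounding box, shrinking each coordinate from a 32-bit float to, say, 16 bits, at the cost of a bounded position error.

```python
def quantize(pos, aabb_min, aabb_max, bits=16):
    """Map a float position inside the cluster AABB to integer grid coords."""
    scale = (1 << bits) - 1
    return tuple(
        round((p - lo) / (hi - lo) * scale)
        for p, lo, hi in zip(pos, aabb_min, aabb_max)
    )

def dequantize(q, aabb_min, aabb_max, bits=16):
    """Recover an approximate float position from the integer coords."""
    scale = (1 << bits) - 1
    return tuple(
        lo + qi / scale * (hi - lo)
        for qi, lo, hi in zip(q, aabb_min, aabb_max)
    )

aabb_min, aabb_max = (0.0, 0.0, 0.0), (10.0, 10.0, 10.0)
p = (1.2345, 6.789, 9.999)
q = quantize(p, aabb_min, aabb_max)
r = dequantize(q, aabb_min, aabb_max)
# Worst-case error per axis is half a grid step: (10 / 65535) / 2
assert all(abs(a - b) <= 10 / 65535 for a, b in zip(p, r))
```

With 16 bits over a 10-unit cluster, the grid step is about 0.15 mm, which is why the precision loss is usually invisible at rendering scale.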
There are also pros and cons to this method of rendering in terms of performance. The triangulation cost imposes significant overhead compared to traditional scene rendering, though it scales far better with scene size and detail. For that quality of rendering, making it viable requires a good amount of memory bandwidth and streaming speeds only possible with modern SSDs.
So it’s only really practical because GPUs have the power to render games with a certain level of fidelity, and RAM and SSD size and speeds for consumer gear are becoming capable of it.
Also there are significant benefits for a developer, especially if using photogrammetry or off-the-shelf high-detail models like Quixel scans, so there’s a reason Epic is going all-in.
- software rasterization for small (down to single-pixel) triangles, which reduces quad overdraw
- deferred materials (only material IDs and some geometry properties are written to the gbuffers in the geometry pass, with things like normal maps, base colour, roughness maps, etc. applied later with a single draw call per material)
- efficient instancing and batching of meshes and their mesh patches to allow arrays of objects to scale well as object count grows
- (edit, added later as I forgot) various streaming and compression techniques, like vertex quantization, to efficiently stream/load at runtime and reduce runtime memory usage and bandwidth
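A toy sketch of the deferred-materials point, with numpy arrays standing in for GPU passes (the buffer layout and the two example materials are my own simplification): the geometry pass writes only a material ID per pixel, and shading then runs once per material over every pixel carrying that ID, rather than once per object.

```python
import numpy as np

H, W = 4, 8
# "Geometry pass" output: a gbuffer holding just a material ID per pixel
# (0 = background, 1 = stone, 2 = metal in this toy example).
material_id = np.zeros((H, W), dtype=np.uint8)
material_id[:, :4] = 1
material_id[:, 4:] = 2

# Per-material "shaders": flat colors standing in for full material
# evaluation (normal maps, base colour, roughness, etc.).
shaders = {1: (0.5, 0.5, 0.4), 2: (0.8, 0.8, 0.9)}

color = np.zeros((H, W, 3), dtype=np.float32)
# "Shading pass": one batched draw per material instead of per object.
for mid, rgb in shaders.items():
    mask = material_id == mid
    color[mask] = rgb

assert np.allclose(color[0, 0], (0.5, 0.5, 0.4))
assert np.allclose(color[0, 7], (0.8, 0.8, 0.9))
```

The win is that the number of shading passes grows with the number of distinct materials, not with the number of objects or triangles on screen.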
These claims aren't isolated. I think it's a bad look for Google to repeatedly make mistakes when they are currently under multiple investigations for monopolistic behavior.
I've been working on an automatic sky-tracking telescope over the past few months. I'm a few weeks behind on blogging but making solid progress. V1 is nearing completion. Then I want to rework some of the electronics and get a custom PCB fabricated. The physical design also needs a complete rework to make it sturdier for long exposures and to solve some wiring pains.
The software allows the platform to automatically align to north, and I'm working on accounting for imperfect leveling (such as placing it on a slanted surface) through software and accelerometers.
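For the leveling part, the usual trick is to derive pitch and roll from the accelerometer's gravity vector while the platform is static. A minimal sketch, assuming an x-forward/z-up axis convention (a real IMU would also need calibration and filtering):

```python
import math

def tilt_from_accel(ax: float, ay: float, az: float) -> tuple:
    """Pitch and roll (radians) from a static accelerometer reading in g.
    Assumes x forward, y left, z up, so a level platform reads (0, 0, 1)."""
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    return pitch, roll

# Perfectly level: gravity sits entirely on the z axis.
pitch, roll = tilt_from_accel(0.0, 0.0, 1.0)
assert pitch == 0.0 and roll == 0.0

# Tilted 10 degrees: part of gravity shows up on the x axis.
pitch, _ = tilt_from_accel(-math.sin(math.radians(10)), 0.0,
                           math.cos(math.radians(10)))
assert abs(math.degrees(pitch) - 10.0) < 1e-6
```

Once pitch and roll are known, the mount's alignment math can fold them into the pointing model instead of requiring the tripod to be physically level.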
The next challenges I want to solve in software are focus detection and then automatic image stacking and post-processing.
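For focus detection, a common starting point is a sharpness score such as the variance of a discrete Laplacian: a well-focused frame has much stronger local contrast than a defocused one. A hedged numpy sketch (the metric is a standard choice, not tied to any particular library):

```python
import numpy as np

def sharpness(img: np.ndarray) -> float:
    """Variance of a discrete Laplacian over the interior pixels.
    Higher values indicate a sharper (better focused) image."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def box_blur(img: np.ndarray) -> np.ndarray:
    """Crude 3x3 box blur on the interior, simulating defocus."""
    out = img.copy()
    out[1:-1, 1:-1] = sum(
        img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    ) / 9.0
    return out

# A checkerboard is maximally "sharp"; blurring it should lower the score.
y, x = np.mgrid[0:32, 0:32]
sharp_img = ((x + y) % 2).astype(float)
assert sharpness(sharp_img) > sharpness(box_blur(sharp_img))
```

An autofocus loop would then step the focuser, score each frame, and hill-climb toward the maximum of this metric.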
The primary goals of the project are a deep dive into robotics and electronics, along with brushing up on webdev, which I don't touch too frequently being in the gamedev world. It also lets me explore things like digital signal processing.
Sure. That's why Brazil fled to Bluesky instead of Threads when Twitter was blocked. It's probably successful among boomers and whoever already has another account with Meta.
I'm not sure how that's entirely relevant. The success of one platform doesn't imply the failure of another. It's also relatively common for different regions of the world to settle into different social networks and messaging systems. See: WhatsApp vs iMessage, VKontakte, WeChat, Telegram and so on.
There are plenty of metrics to support the fact that Threads was a successful launch.
More like 175M Threads users alone. They were already set up for success given Meta's existing userbase. Now they have to herd that demographic among their different apps, so it was only logical to launch a Twitter alternative in order to retain their userbase and profit from Elon's task-oriented leadership, which has materialized in Xwitter shedding users.
https://apnews.com/article/anthropic-copyright-authors-settl...