> to utilize the “O(M) guarantee”, one must know how many objects will be freed
No -- unless I'm misunderstanding you(?), this is incorrect. The following pseudocode (I'll just use Python syntax for simplicity) has no idea how many objects are being deleted:
while node is not None:
    print("[Progress] Deleting: " + node.name)
    prev = node
    node = node.next
    del prev  # drops the last reference; assume this happens M times
Assume each node owns O(1) blocks of memory.
With GC, the user can see random arbitrarily-long stutters, depending on the shape of the rest of the object graph, and the phase of the moon.
With RC, you can rely on the user seeing your progress indicators consistently... without knowing or caring what M was.
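For what it's worth, here's a runnable version of that sketch. The Node class and the __del__ hook are my own scaffolding (not part of the original pseudocode); CPython's refcounting stands in for RC, so the finalizer fires the instant each node's last reference is dropped:

```python
deleted = []  # records the order in which nodes are freed

class Node:
    def __init__(self, name, nxt=None):
        self.name = name
        self.next = nxt

    def __del__(self):
        # Under CPython's refcounting this runs the moment the last
        # reference is dropped -- no separate collection pass, no pause.
        deleted.append(self.name)
        print("[Progress] Deleting: " + self.name)

# Build a three-node list: n1 -> n2 -> n3
node = None
for name in ("n3", "n2", "n1"):
    node = Node(name, node)

while node is not None:
    prev = node
    node = node.next
    del prev  # the node is freed right here, one O(1) step per node
```

Each iteration frees exactly one node, so the progress line prints at a steady cadence regardless of how large M is.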
> to utilize “you can access all the other objects in parallel”, one must know that two objects are not reachable from each other, which is not always the case e.g. objs=find(scene,criteria)
Again, this is false, and (subtly) misses the point.
The claim wasn't "you can access all objects in parallel". That's not true in any system, be it GC or RC.
The claim is "you can still access some objects in parallel" with RC. The crucial difference here is that, under a GC, all threads are at risk of getting throttled because of each other arbitrarily. They simply cannot do any work (at least, nothing that allocates/frees memory) without getting throttled or interrupted at the whim of the GC.
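To make the "some objects in parallel" point concrete, here's a toy Python sketch (my construction, not from the thread; CPython's GIL means this isn't true parallelism, but the relevant property still shows: each free is an O(1) step that interleaves with the other thread's work, rather than a global stop-the-world pause):

```python
import threading

class Node:
    def __init__(self, name, nxt=None):
        self.name = name
        self.next = nxt

def build(n):
    # Build an n-node singly linked list.
    head = None
    for i in range(n):
        head = Node(i, head)
    return head

done = 0

def worker():
    # This thread touches none of the nodes being freed: under RC it is
    # never stopped by a global collection, only by normal scheduling.
    global done
    for _ in range(100_000):
        done += 1

node = build(10_000)
t = threading.Thread(target=worker)
t.start()
while node is not None:
    node = node.next  # each step frees exactly one node, O(1) work
t.join()
```

The freeing loop and the worker both run to completion; neither one has to wait for the other to finish a collection cycle.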
> With RC, you can rely on the user seeing your progress indicators consistently... without knowing or caring what M was.
What does that mean? How is it helpful for the user to know you've freed 56,789 objects so far if they don't know whether there are 60k in total or 600k?
> The crucial difference here is that, under a GC, all threads are at risk of getting throttled because of each other arbitrarily. They simply cannot do any work (at least, nothing that allocates/frees memory) without getting throttled or interrupted at the whim of the GC.
And that's also the case for all threads that share the same RC'd object: they will all be throttled when the memory is freed.
The biggest benefit of RC is how seamlessly it interacts with native code; that's why it's a great fit for languages like Obj-C and Swift, or Rust (opt-in). But in terms of performance, be it latency or throughput, it's not a particularly good option: the trade-off it makes is comparable to a copying GC, except with higher max latency and even lower throughput, and without fast allocation and memory compaction as side benefits.