That reminds me of this talk[0] by Gil Tene called "How NOT to Measure Latency" at the Strange Loop conference in 2015 (or read this blog post[1] that covers the most important points).
Author here. That was a great article, thanks for sharing. Especially the part about how your probability of experiencing a p99 latency is much higher than you'd intuit.
I don't agree with all of it, but a few points, made directly or indirectly, definitely hit home, such as:
- there is no single metric that can accurately represent "latency"
- most of our metrics are misleading in what they unconsciously include or exclude
I can remember once looking at a graph of requests/second and wishing I could see a distribution of requests per millisecond within an individual second. That level of detail is hard to come by, so in the meantime we do what we can with the data we have.
If you have individual request logs with timing information, you could construct that after the fact. It does take some effort to find an effective way of displaying these metrics. Where would you put an individual request that took 532ms and started at t=34.682s? Would you align all requests that started in the 34th second at t=34s, or bucket by completion time (i.e., within t=35s)?
Would you rather see "number of requests started at this ms" (you seem to suggest this), or is something else more interesting?
I think a sort of Gantt chart that plots duration of requests as well as starting time within the time span (e.g. a second or more) might be very informative. Each individual request on a different position on the Y axis, time on the X axis. Perhaps you have some bound on requests in flight, that could be the height of the Y axis, so you can easily see calm or busy periods.
At least our observability stack doesn't show this level of detail, but it would be very interesting to have it. (We do have calculated heatmaps based on maximum request time in Grafana, which is at least better than plots of average request times)
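As a rough sketch of that Gantt-style idea (hypothetical Python: the (start_ms, duration_ms) pairs are made up, and a crude text rendering stands in for a real charting library):

```python
# Hypothetical (start_ms, duration_ms) pairs for requests within one second.
requests = [(12, 85), (15, 430), (300, 40), (310, 532), (870, 90)]

WIDTH = 50  # number of text columns representing the 1000 ms span

def gantt(reqs, width=WIDTH, span_ms=1000):
    """Render each request as a bar: position = start time, length = duration."""
    rows = []
    for start, dur in sorted(reqs):
        col = start * width // span_ms
        length = max(1, dur * width // span_ms)
        # Clip bars that would run past the end of the plotted span.
        rows.append(" " * col + "#" * min(length, width - col))
    return rows

for row in gantt(requests):
    print(row)
```

Each request gets its own row, so overlapping bars in the same column region immediately show a busy period, and mostly-empty columns show a calm one.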
Good question. Whether to log the millisecond when a request starts or when it ends is a great example of how complex these things are to reason about accurately, let alone capture.
I'd want to log when the requests start, as I'm mostly concerned with how well-distributed request arrival was at that level of granularity.
I wondered if the network layers between my client and server were effectively "smoothing" request arrival across each second, or if instead requests were very bursty, so that a per-minute spike in a typical graph was dominated by a few seconds or milliseconds within that minute.
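A rough sketch of that check (hypothetical Python; `arrivals_ms` stands in for arrival timestamps you'd pull from request logs):

```python
from collections import Counter

# Hypothetical arrival timestamps in milliseconds since some epoch.
arrivals_ms = [34012, 34013, 34013, 34013, 34512, 34890, 34891]

# Bucket arrivals by millisecond offset within their second.
per_ms = Counter(t % 1000 for t in arrivals_ms)

# A crude burstiness signal: requests in the busiest millisecond
# versus the per-millisecond average over the whole second.
busiest = max(per_ms.values())
average = len(arrivals_ms) / 1000
print(f"busiest ms: {busiest} requests (avg {average:.3f}/ms)")
```

If `busiest` is far above the average, arrivals are bursty at the millisecond level; if the counts are roughly flat, something is smoothing them out.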
[2020], and written for IOCCC: The International Obfuscated C Code Contest.
This was awarded "Best of Show - abuse of libc" at the time[0]. See also the judges' remarks[1]:
This program consists of a single printf(3) statement wrapped in a while loop. You would not think that this would amount to much, but you would be very, very wrong. A clue to what is happening and how this works is encoded in the ASCII art of the program source.
Related thread from 11 days ago: https://news.ycombinator.com/item?id=47067395 "What years of production-grade concurrency teaches us about building AI agents", 144 points, 51 comments.
Agree. I'll catch up on group chats that do not require immediate attention when it suits me, not when the stream of messages happens to arrive.
As for OP: read up on alert fatigue; if a notification isn't directly actionable, you shouldn't even see it!
The pull model for information is more durable for humans than the push model. Try RSS for news/blogs, take some time (preferably offline) each week to prepare for the important events in the upcoming week(s), write them down on something you pass by every day (such as a whiteboard near your front door).
It seems your project is at a really early stage. Almost none of the links on the page work, which is too bad, because it could have provided more background information on your goals and wishes. The only thing that seems to work is login through Google, which is a bit much for a demo site.
What's going to be the edge above the already excellent https://bgp.tools ?
You could have a _somewhat_ static blog and incorporate something like Webmentions[0] for comments or replies. For example, Molly White's microblog[1] shows the following text below the post:
Have you responded to this post on your own site? Send a webmention[0]! Note: Webmentions are moderated for anti-spam purposes, so they will not appear immediately.
I find this method to be a sweet spot: you generate content at your own pace, while allowing other people to "post" to your website, without relying on a third-party service like Disqus.
I found this document[0] very insightful. It's quite a long read, but gradually introduces the concepts needed for double-entry bookkeeping.
I think the main advantage is that you can granularly keep track of the movement of money, stocks, commodities, etc., and their conversions. As a day-to-day example, it lets you track invoices received (Liabilities, or more specifically Accounts Payable), transactions on a bank account (Assets), and what you are going to spend, or at some point have spent (Expenses).
This separation allows you to, for example, enter an invoice received on January 1 into Accounts Payable, with a corresponding value in Expenses. At this point, nothing has actually been paid yet; it's simply an administrative transfer of some amount between a liability account and an expenses account (the postings in a transaction must sum to zero, so one amount is negative while the other is positive; see [0] for more details).
As a result, this gives you insight into what still needs to be paid. Once the transaction for that invoice shows up on your bank account on, for example, January 10, it gets booked against Accounts Payable, giving you a link between an invoice, its payment, and finally the amount spent. (This concept also works the other way around; see this sibling comment[1], where it's also extended to working with multiple accounts.)
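A minimal sketch of that flow (hypothetical Python; account names and amounts are made up, and the sign convention here treats debits as positive):

```python
from collections import defaultdict

# Each transaction is a list of (account, amount) postings that must sum to zero.
transactions = [
    # Jan 1: invoice received -- booked as an expense, owed to the supplier.
    [("Expenses:Office", 100.0), ("Liabilities:AccountsPayable", -100.0)],
    # Jan 10: payment leaves the bank account, clearing the payable.
    [("Liabilities:AccountsPayable", 100.0), ("Assets:Bank", -100.0)],
]

balances = defaultdict(float)
for postings in transactions:
    # The double-entry invariant: every transaction balances to zero.
    assert abs(sum(amount for _, amount in postings)) < 1e-9
    for account, amount in postings:
        balances[account] += amount

# After the payment, Accounts Payable is back to zero: nothing left to pay.
print(balances["Liabilities:AccountsPayable"])  # 0.0
print(balances["Assets:Bank"])                  # -100.0
```

Between January 1 and January 10, a nonzero Accounts Payable balance is exactly the "still needs to be paid" insight described above.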
I do like how it has some brutalist web design elements. As for the drop-shadow colours on the /symbols page: they don't provide additional structure, so I would choose either none or a grayscale tint. Or, if you prefer colours, pick as many distinct colours as there are categories, so that they do provide that additional structure.
The symbols on that page could be a bit bigger, though, as they are the main subject. (I changed 1.125rem to something like 1.6rem for text-lg; that works, but it could get a bit crowded with the clickable arrow on lower resolution screens).
I'm not a huge fan of things that move; the offset of the symbol block and the scaling of an individual symbol block on hover together feel like a bit too much. I would do either, but not both.
[0] https://www.youtube.com/watch?v=lJ8ydIuPFeU
[1] https://bravenewgeek.com/everything-you-know-about-latency-i...