
> Now reuse the same connection to request the nested data, which can all have more nested links in them, and so on.

This still involves multiple round-trips though. The approach laid out in the article lets you request exactly the data you need up-front and the server streams it in as it becomes available, e.g. cached data first, then data from the DB, then data from other services, etc.



When you have an HTTP/2 connection already open, a 'round-trip' is not really a gigantic concern performance-wise. And it gives the client application complete control over what nested parts it wants to get and in what order. Remember that the article said it's up to the server what order to stream the parts in? That might not necessarily be a good idea from the client's point of view. It would probably be better for the client to decide what it wants and when. E.g., it can request the header and footer, then swap in a skeleton facade in the main content area, then load the body and swap it in when loaded.
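That client-driven ordering is easy to sketch. This is just an illustration, assuming hypothetical endpoints (/header, /footer, /body) and a stand-in render() helper; none of these names come from the article:

```typescript
// Stand-in for a real DOM update; just logs which slot got filled.
const render = (slot: string, html: string): void => {
  console.log(`render ${slot}: ${html.slice(0, 40)}`);
};

// Client decides the order: cheap framing parts first (in parallel, on the
// same connection), then a skeleton placeholder, then the heavy body.
async function loadPage(
  fetchText: (url: string) => Promise<string>
): Promise<string[]> {
  const order: string[] = [];

  const [header, footer] = await Promise.all([
    fetchText("/header"),
    fetchText("/footer"),
  ]);
  render("header", header); order.push("header");
  render("footer", footer); order.push("footer");

  render("main", "<div class=\"skeleton\"></div>"); order.push("skeleton");

  const body = await fetchText("/body");
  render("main", body); order.push("body");

  return order;
}
```

The fetcher is injected so the same logic works with a real fetch(url).then(r => r.text()) in the browser.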


Round trips for parallel requests work fine over HTTP/2. (As long as there aren't vast numbers of tiny requests, for example every cell in a spreadsheet).

However, sequentially-dependent requests are about as slow over HTTP/2 as over HTTP/1.1. For example: your client side, after loading the page, requests data to fill a form component; that data indicates a map location, so your client side requests a map image with pins; the pin data has a link to site-of-interest bubble content, and you will be automatically expanding the nearest one, so your client side requests the bubble content; the bubble data has a link to an image, so the client requests the image...

Then over HTTP/2 you can either have 1 x round trip time (server knows the request hierarchy all the way up to the page it sends with SSR) or 5 x round trip time (client side only).
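The arithmetic is just depth times RTT, since each dependent request can't start until the previous response arrives. A toy model (function names are mine, purely illustrative):

```typescript
// Client-driven: each link in the chain costs one full round trip,
// because the client only learns the next URL from the previous response.
function clientDrivenLatencyMs(rttMs: number, chainDepth: number): number {
  return rttMs * chainDepth; // page -> form data -> map -> bubble -> image
}

// Server-driven (SSR / streamed response): the server already knows the
// whole hierarchy, so everything arrives within a single round trip.
function serverDrivenLatencyMs(rttMs: number): number {
  return rttMs;
}

console.log(clientDrivenLatencyMs(1000, 5)); // 5000 ms on a 1 s mobile RTT
console.log(serverDrivenLatencyMs(1000));    // 1000 ms
```

This ignores server processing time and client-side work between requests, which only make the sequential case worse.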

When round trip times are on the order of 1 second or more (as they often are for me on mobile), >1s versus >5s is a very noticeable difference in user experience.

With lower-latency links of 100ms per RTT, the UX difference between 100ms and 500ms is not a problem, but it does feel different. If you're on a <10ms RTT, then 5 sequential round trips are hardly noticeable, though at that point it depends more on client-side processing time adding back-to-back delays.


> When round trip times are on the order of 1 second or more (as they often are for me on mobile)

For an already-open HTTP/2 connection? Or for a new connection for each request?


Assuming a stable connection, there is no meaningful performance difference between a request/response round-trip from the client to the server, and a response streamed from the server to the client, amortized over time.



