You're not going to like the answer, but I think it captures some of what you're getting at.
Windows 95. Old-style GUI programming meant sitting in a loop, waiting for the next event, then handling it. You type a letter, there's a switch statement, and the next character is rendered on the screen. Being able to copy a file and type at the same time was a big deal. You'd experience the dead-letter queue when you moved a window while the OS was handling a device: the window would sort of smear across the screen when the repaint events were dropped.
Concurrent programming is hard. State isolation from microservices helps a lot, but eventually you'll need to share state, and people try stuff like `add 1 to x`, but that has bugs, so they say `if x == 7 add 1 to x`, but that has bugs, so they say `my vector clock looks like foo; if your vector clock matches, add 1 to x, and give me back your updated vector clock`. But now you've imposed a total order and given up a lot of performance.
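That progression can be sketched in Python. This is a toy `Counter` I made up purely for illustration (a real distributed version would run these updates on separate nodes over the network):

```python
import threading

class Counter:
    """Toy shared counter showing the three update styles above."""
    def __init__(self):
        self.x = 0
        self.clock = {}          # node id -> logical time (a vector clock)
        self.lock = threading.Lock()

    def blind_add(self):
        self.x += 1              # "add 1 to x": lost updates under concurrency

    def compare_and_add(self, expected):
        # "if x == 7 add 1 to x": a compare-and-set, still racy without the lock
        with self.lock:
            if self.x == expected:
                self.x += 1
                return True
            return False

    def clocked_add(self, caller_clock, node):
        # "if your vector clock matches, add 1 to x, give back the clock":
        # correct, but every writer now serializes through one total order
        with self.lock:
            if caller_clock == self.clock:
                self.x += 1
                self.clock[node] = self.clock.get(node, 0) + 1
            return dict(self.clock)
```

A stale caller (one whose clock no longer matches) gets rejected, which is exactly where the performance goes: everyone queues up behind that one comparison.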
I'm blind to the actual problem you're facing. My default recommendation is to have a monorepo and break out helpers for expensive tasks, no different from spinning up a thread or process on a big host. Have a plan for building pipelines a->b->c->d. Also have a plan for fan-out: a->b & a->c & a->d.
It has been widely observed that there are no silver bullets, but there are regular bullets. Careful and thoughtful use can be a huge win. If you're in that exponential-growth phase, it's duct tape and baling wire all the way; get used to everything being broken all the time. If you're not, take your time and plan a few steps ahead. Operationalize a few microservices. Get comfortable with the coordination and monitoring. Learn to recover gracefully, and hopefully find a way to dodge that problem next time around.
Sorry this is hand-wavy. I don't think you're missing anything; it's just hard. If you're stuck because it won't fit on 1x anymore, you've got to find a way to spread out the load.
I fully agree with this, and that's still quite common in the embedded world.
The user presses a button which sets a hardware event flag. CPU wakes up from sleep, checks all event flags, handles them, clears the interrupt bits and goes back to sleep.
But using events like this requires a very tight integration between event producer and consumer, so I don't think this will translate well to distributed systems or microservices.
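That sleep/check/clear loop can be sketched as follows. Python is used only for illustration (real firmware would be C, and the flags would be hardware interrupt registers set by ISRs, not module globals):

```python
# Simulated event-flag main loop. In real firmware these flags live in a
# hardware register and are set by interrupt service routines.
BUTTON_FLAG = 0x01
TIMER_FLAG = 0x02

event_flags = 0
handled = []

def isr_button_press():
    """Pretend ISR: the button press sets its event flag."""
    global event_flags
    event_flags |= BUTTON_FLAG

def main_loop_iteration():
    """One wake-up: check all flags, handle them, clear the bits."""
    global event_flags
    if event_flags & BUTTON_FLAG:
        handled.append("button")
        event_flags &= ~BUTTON_FLAG
    if event_flags & TIMER_FLAG:
        handled.append("timer")
        event_flags &= ~TIMER_FLAG
    # ...then the CPU would go back to sleep until the next interrupt
```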
There is an example where highly distributed, event-driven systems are used, and we use them every day: cars.
In a car there are many distributed ECUs (electronic control units) that communicate in an event-driven system. Cyclic communication was tried, but all those attempts failed in the long run, because cyclic communication would require that all the independent ECUs be synced to each other, which is a very hard problem. That is the reason why everybody moved away from cyclic communication.
That said, to help development in automotive, middleware has been developed and used to support building such event-driven systems. You develop your functions and define how the signals are routed between those functions. Then someone else, later and independently, decides how the functions are distributed across the different ECUs depending on the available resources. The middleware then takes care of the correct routing of the signals. Everything is event-driven.
Automotive ECUs use cyclic communication and are event driven. ECUs send their signals over CAN at a defined message rate, regardless of whether the state has changed. Other ECUs set up their hardware to monitor the signals they care about, and trigger a hardware interrupt when the signal is received.
ECUs do both cyclic and event-driven: typically the software logic is event-driven, while the communication is cyclic.
There's cyclic communication by sending messages over CAN (if reliability doesn't matter that much) or FlexRay (if reliability matters), so if you examine the network, you'll typically see the same PDUs repeat in a cyclic manner. But an individual ECU will handle the bulk of its logic in an event-driven way. An interrupt will trigger for the incoming CAN frame (and that's an event-driven thing!), but a couple of layers up it will most likely just set a flag for what changed, and it will actually be taken care of at a logical level when the relevant task's loop runs next time.
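That split (cyclic on the wire, event-driven in the logic) can be sketched roughly like this. The class and field names here are invented for illustration; a real stack would be C code driven by actual CAN controller interrupts:

```python
from collections import deque

class CyclicSender:
    """Sends the current signal value every cycle, changed or not."""
    def __init__(self, bus, pdu_id, value=0):
        self.bus, self.pdu_id, self.value = bus, pdu_id, value

    def on_cycle(self):
        self.bus.append((self.pdu_id, self.value))  # same PDU repeats

class Receiver:
    """The RX 'interrupt' only records what changed; logic runs later."""
    def __init__(self):
        self.changed = set()     # flags set by the frame 'interrupt'
        self.signals = {}

    def rx_interrupt(self, pdu_id, value):
        # A couple of layers up from the hardware IRQ: set a flag on change
        if self.signals.get(pdu_id) != value:
            self.changed.add(pdu_id)
        self.signals[pdu_id] = value

    def task_loop(self):
        # The periodic task reacts only to what actually changed
        updates, self.changed = self.changed, set()
        return updates
```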
> […] and trigger a hardware interrupt when the signal is received.
That is the definition of event driven.
The same is true not only for CAN-based and CAN FD-based communication, but also for Automotive Ethernet-based communication.
There were attempts with time-triggered protocols like TTP/C (used by Daimler for exactly one model) and FlexRay, which had cyclic communication. The communication cycle required that the ECUs be synced to it, because they needed to have the correct data available at exactly their time slot. If they missed the time slot, the data got marked as stale. The same problem exists on the receiving side.
Electronics can be more trouble than they're worth in cars. If they made modern cars with no electronics in the engine or controls, maybe I'd buy new cars. It's frustrating when my old Cherokee won't start because the antitheft system is having some kind of conniption for example, and I have to go through this ritual of locking and unlocking the doors, reconnecting the battery with the key in the ignition, flipping this mystery switch the previous owner has no clue about but seems to contribute somehow. There's a process to disable this system by buying a new ECU or something but I haven't gotten around to it.
I exclusively drive modern cars, and have never experienced anything along the lines of what you describe. So maybe the problem isn't with modern cars, but old Cherokees ;)
> But using events like this requires a very tight integration between event producer and consumer, so I don't think this will translate well to distributed systems or microservices.
It's good to hear from an embedded dev. Embedded is so under-represented that people love to argue about these things as if low-level devs didn't use them every day.
Partially. Also, the hardware is generally old. Really old. Imagine hardware from the '00s, and there's a good chance that's what's inside the last ATM you used.
> Windows 95. Old-style GUI programming meant sitting in a loop, waiting for the next event, then handling it.
The GUI worked in a single thread, but your whole program didn't need to do that.
> Being able to copy a file and type at the same time was a big deal.
That's not correct. You just needed to create a separate thread for the file operation. For some programmers that was a big deal indeed, and the same could be said for some programming tools. But that wasn't the general case at all.
There were some ugly things, like the way file operations were treated down the OS level, but it wasn't impossible to make your application responsive. It wasn't even difficult... if you knew how to do it.
I am not sure about Windows 95, but the MS-DOS legacy is single-threaded, so you did not have the luxury of the abstraction being provided for you by the operating system.
Some people wrote very inventive software to get around it.
In Windows there is a fundamental concept of the message loop.
The message loop is an obligatory section of code in every program that uses a graphical user interface under Microsoft Windows.
Windows programs that have a GUI are event-driven.
Note that Windows NT is fully multithreaded, but there is still a thread sitting there listening to the event loop (of course other threads can do other things).
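The shape of that obligatory loop can be simulated in a few lines. The real thing is the Win32 C API (`GetMessage`/`DispatchMessage` and a registered window procedure); this Python sketch just mimics the structure, and the message constants and `screen` list are stand-ins:

```python
from queue import Queue

# Real Win32 message identifiers (values from winuser.h)
WM_CHAR, WM_PAINT, WM_QUIT = 0x0102, 0x000F, 0x0012

def window_proc(msg, param, screen):
    """The per-window callback: one big switch on the message type."""
    if msg == WM_CHAR:
        screen.append(param)          # render the typed character
    elif msg == WM_PAINT:
        screen.append("<repaint>")

def message_loop(message_queue, screen):
    """Block for the next message, dispatch it, repeat until WM_QUIT."""
    while True:
        msg, param = message_queue.get()   # like GetMessage()
        if msg == WM_QUIT:
            break
        window_proc(msg, param, screen)    # like DispatchMessage()
```

Everything the window does happens inside `window_proc`, which is why a slow handler freezes the whole UI: no other message gets dispatched until it returns.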
OP is talking about what you could do using the built in Windows utilities, not what is possible if you write your own file copy-and-note taking utility (which some people did, like Borland Sidekick.)
I guess what you call "built in Windows utilities" is making a call to the GUI shell. It's actually a very complex inter-process communication that happened to be a one-liner from VB and showed that fancy flying folders animation. Not very flexible though.
Writing "my own" code for file copy was not really so low-level, it's what most people understood as programming at the time. Locating the file, opening it, using a descriptor and a loop to copy blocks through a buffer, closing the file, managing errors... that kind of thing.
If you wanted to do the file operation in the background and keep using the GUI for input, you did need to create a separate thread. But that wasn't some black art feat, you just read a book or searched Altavista for a snippet, if in a hurry.
> Have a plan for building pipelines a->b->c->d. also have a plan for fan out a->b & a->c & a->d
What does that look like? Do you mean to learn how this is done with the CI of choice, create helper functions, or are there concrete steps that you would recommend? I'm new to this and would appreciate any feedback.
Ohh, you're talking about call pipelines; I totally missed the point. You mean time-constrained systems where an answer within the timeframe is required, right? Otherwise the timeout doesn't make sense. Or would you use such precise timeouts even in non-time-constrained systems?
Anyway, it seems to make sense to plan for this timing-wise. It allows seeing and addressing performance bottlenecks. Thank you for spelling it out for me!
Aren't most "events" just an abstraction that's actually implemented by polling and/or an event queue at some level anyway, be it a library or the kernel?
As someone who worked at the company that taught Pete Hunt and other early React contributors this trick, I wouldn't call it a benefit, but rather the only workable solution for acceptable single-page web application performance in an otherwise terrible development environment (browser JavaScript).
Proper concurrency support and real mutexes would be way better than having a single execution thread for all your computations that is shared with all the visual and human-interaction computations. Plus, the data-sharing model between the main thread and web workers is pretty crap for anything serious, so it's not even easy to get computations off the main thread that don't need to be there.
I don't think I agree. I can't think of many tasks I've ever tried to solve in programming where moving the work to another thread was the right answer. Usually things are too entangled, or too fast to bother. And there's so much overhead in threading: thread creation and destruction, message passing with mutexes or whatever, and you need to figure out how to share data in a way that doesn't accidentally give you lower overall performance for your trouble. I'd rather spend the same time just optimising the system so it doesn't need its own thread in the first place.
The mostly single threaded / async model is a hassle sometimes. But I find it much easier to reason about because I know the rest of my program has essentially paused during the current callback’s execution.
I've also found few use cases for workers in the browser, but as the parent pointed out, that's partly because of the limited and sluggish communication between threads in JavaScript. Just to offer an opposite perspective: server-side, almost any modestly heavy processing I do with Node.js happens in worker threads. These are pooled so they don't need to spin up. Blocking Node's event loop is never an option. It's fine to access a big dataset with an async call, but if you need to crunch that data before passing it to the client, you pretty much have to use workers. [edit] Obviously assuming you're not calling some command-line process, and you're trying to chew through some large rendering of data using JS.
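The Node pattern (hand heavy work to a pooled worker so the event loop stays free) has a close analogue in Python's asyncio, which is easier to show self-contained than a `worker_threads` setup. Everything here (`crunch`, `serve_one`) is an invented example:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

def crunch(data):
    """CPU-heavy work that would block the event loop if run inline.
    (A real CPU-bound job in Python would want a process pool to dodge
    the GIL; a thread pool keeps this sketch simple and portable.)"""
    return sum(x * x for x in data)

async def handle_request(pool, data):
    # Offload to a pooled worker; the event loop stays responsive
    # and can keep servicing other callbacks meanwhile.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(pool, crunch, data)

def serve_one(data):
    """Drive a single request through the loop, for demonstration."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return asyncio.run(handle_request(pool, data))
```

Same idea as pooled Node workers: the event loop only ever awaits the result; the crunching happens elsewhere.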
Unless something has changed with web assembly since I last did browser development, doesn't it still compile down to javascript in the same execution environment with those limitations?
I'm not very familiar with concurrency at a lower level, so I can't comment on your complaints. What Clojurescript provides is considerably better abstractions that in turn get you correct implementations with much less incidental complexity or hassle.
Not sure why my sincere question about the state of browser execution environment got downvoted, but the general concerns I raised in my first comment weren't about incidental complexity or hassle. They matter and it's great that clojurescript helps with those two. The concern I voiced was about maintaining realtime performance of a responsive user interface. The key issue is that you have a single event loop that all visual and human interactions need to be handled by because that's the only thread that can interact with the DOM. It's just way too easy to block that thread and cause the app to become unresponsive.
> The key issue is that you have a single event loop that all visual and human interactions need to be handled by because that's the only thread that can interact with the DOM.
Isn't that normal? A single event/drawing loop. Would there be an advantage to two threads drawing? Sounds pretty novel.