One of my best commits was removing about 60K lines of code, a whole "server" (this was the early 2000s) that had to hold all of its state in memory, and replacing it with about 5K lines of logic that were lightweight enough to piggyback onto another service and had no in-memory state at all. That was a pure algorithmic win: figuring out that a specific guided subgraph isomorphism, where the target was a tree (a directed acyclic graph with a single root), was possible with a single walk through the origin (general) directed bi-graph, emitting vertices and edges to the output graph (the tree) while maintaining only a small, in-process, peek-able stack of the steps taken from the root that could affect the current generation step (not necessarily just the parent path).
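The flavor of it, as a rough sketch from memory (not the original code; all names here are invented):

```typescript
// Check whether a tree-shaped pattern can be matched starting from a node of a
// directed graph, in one walk, keeping only a stack of the steps taken so far.
type NodeId = string;

interface TreePattern {
  // A step may peek at the whole path walked from the root, not just the parent.
  matches: (node: NodeId, pathFromRoot: NodeId[]) => boolean;
  children: TreePattern[];
}

function matchFrom(
  graph: Map<NodeId, NodeId[]>, // adjacency list of the directed graph
  start: NodeId,
  pattern: TreePattern,
  path: NodeId[] = [], // the small, peek-able stack
): boolean {
  if (!pattern.matches(start, path)) return false;
  path.push(start);
  const ok = pattern.children.every((child) =>
    (graph.get(start) ?? []).some((next) => matchFrom(graph, next, child, path)),
  );
  path.pop();
  return ok;
}
```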
I still remember the behemoth of a commit that was "-60,000 (or similar) lines of code". Best commit I ever pushed.
Those were fun times. I haven't done anything algorithmically impressive since.
I’m a hobby programmer and lucky enough to script a lot of things at work. I consider myself fairly adept at some parts of programming, but comments like these make it so clear to me that I have an absolutely massive universe of unknowns that I’m not sure I have enough of a lifetime left to learn about.
I want to believe a lot of these algorithms will "come to you" if you're ever in a similar situation; only later will you learn that they have a name, or that there are books written about them, etc.
But a lot is opportunity. Like, I had the opportunity to work on an old PHP backend, 500ms - 1 second response times (thanks in part to it writing everything to a giant XML string which was then parsed and converted to a JSON blob before being sent back over the line). Simply rewriting it in naive / best practices Go changed response times to 10 ms. In hindsight the project was far too big to rewrite on my own and I should have spent six months to a year trying to optimize and refactor it, but, hindsight.
This is my experience, and my favorite way to learn: go in blindly, look things up when you get stuck/run out of ideas. I think it forces a deeper understanding of the topic, and really helps things "stick". I assume it's related to the massive dumps of dopamine that come from the "Eureka!" moments.
I've "invented" all sorts of old patents, all sorts of algorithms, including the PID algorithm. I think it helped form a very practically useful "intuition".
But, I've noticed that some people are passionately against this type of self exploration.
Yes. I invented the Trie data structure when I was 19. It was very exciting finding out it had a name, and it was indeed considered a good fit for my use case.
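For anyone who hasn't met it: a trie is just a tree keyed by characters, so shared prefixes share a path. A minimal sketch:

```typescript
// Prefix tree with insert and prefix lookup only.
class TrieNode {
  children = new Map<string, TrieNode>();
  isWord = false;
}

class Trie {
  private root = new TrieNode();

  insert(word: string): void {
    let node = this.root;
    for (const ch of word) {
      if (!node.children.has(ch)) node.children.set(ch, new TrieNode());
      node = node.children.get(ch)!;
    }
    node.isWord = true;
  }

  hasPrefix(prefix: string): boolean {
    let node = this.root;
    for (const ch of prefix) {
      const next = node.children.get(ch);
      if (!next) return false;
      node = next;
    }
    return true;
  }
}
```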
That's so funny, I had the exact same experience. And when I was 16 I "invented" CSVs because I was too lazy to set up SQL for my Discord bot. I like to think I've gotten better at searching for the correct solution to things rather than just jumping in with my best guess.
I was just going to say the same: LLMs are great for giving a name to a described concept, architecture, or phenomenon. And I would argue that hallucinations don't actually matter much for this usage, as you're going to turn around and google the name anyway, once you've been told it.
Read some good books on data structures and algorithms, and you'll be catching up with this sort of comment in no time. And then realise there will always be a universe of unknowns to you. :-) Good luck, and keep going.
Do try (so you get the joy of 'small' wins), but also know that you'll never touch all of it (so you don't despair when you don't master quantum mechanics in one lifetime).
(More than?) half of the difficulty comes from the vocabulary. It’s very much a shibboleth—learn to talk the talk and people will assume you are a genius who walks the walk.
That! It took me a while to start. My education in graph theory wasn't much better than your average college grad's, but I found it fascinating and started reading. I was also very lucky to have had two great mentors - my TL and the product's architect - and the former helped me expand my understanding of the field.
A lot of it is just technical jargon. Which doesn't mean it's bad - one has to have a way to talk about things - but the underlying logic, I've found, is usually graspable for most people.
It's the difference between hearing a lecture from a "bad" professor at uni and watching a lecture video by Feynman, where he tries to get rid of scientific terms and explain things simply to the public.
As long as you get a definition for your terms, things are manageable.
You could've figured this one out with basic familiarity with how graphs are represented, constructed, and navigated, and by just working through it.
One way to often arrive at it is to just draw some graphs, on paper/whiteboard, and manually step through examples, pointing with your finger/pen, drawing changes, and sometimes drawing a table. You'll get a better idea of what has to happen, and what the opportunities are.
This sounds "Then draw the rest of the owl," but it can work, once you get immersed.
Then code it up. And when you spot a clever opportunity, and find the right language to document your solution, it can sound like a brilliant insight you just pulled out of the air because you are so knowledgeable and smart in general - when you actually had to work through that specific problem, to the point you understood it, like Feynman would want you to.
I think Feynman would tell us to work through problems. And that Feynman would really f-ing hate Leetcode performance art interviews (like he was dismayed when he found students who'd rote-memorize the things to say). Don't let Leetcode asshattery make you think you're "not good at" algorithms.
I despise leetcode interviews. These days, with coding LLMs, I see them as even less relevant than they were before.
Yet you ask someone "how do you build an efficient LFU cache?" and get blank stares (I just LOVE the memcache solution of regions and probabilistic promotion/demotion).
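(To be clear, what follows is not the memcache scheme I mentioned - just the classic frequency-bucket shape of an LFU, sketched from memory, assuming capacity >= 1:)

```typescript
// Keys grouped by hit count; evict from the lowest non-empty bucket.
class LFUCache<K, V> {
  private values = new Map<K, V>();
  private freq = new Map<K, number>();
  private buckets = new Map<number, Set<K>>(); // hit count -> keys with that count
  private minFreq = 0;

  constructor(private capacity: number) {}

  private bump(key: K): void {
    const f = this.freq.get(key)!;
    this.buckets.get(f)!.delete(key);
    if (f === this.minFreq && this.buckets.get(f)!.size === 0) this.minFreq = f + 1;
    this.freq.set(key, f + 1);
    if (!this.buckets.has(f + 1)) this.buckets.set(f + 1, new Set());
    this.buckets.get(f + 1)!.add(key);
  }

  get(key: K): V | undefined {
    if (!this.values.has(key)) return undefined;
    this.bump(key);
    return this.values.get(key);
  }

  put(key: K, value: V): void {
    if (this.values.has(key)) {
      this.values.set(key, value);
      this.bump(key);
      return;
    }
    if (this.values.size >= this.capacity) {
      // evict one of the least frequently used keys
      const bucket = this.buckets.get(this.minFreq)!;
      const victim: K = bucket.values().next().value!;
      bucket.delete(victim);
      this.values.delete(victim);
      this.freq.delete(victim);
    }
    this.values.set(key, value);
    this.freq.set(key, 1);
    if (!this.buckets.has(1)) this.buckets.set(1, new Set());
    this.buckets.get(1)!.add(key);
    this.minFreq = 1;
  }
}
```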
I deleted an entire microservice of task runners and replaced it with a library that uses setTimeout as the primitive driving tasks from our main server.
It’s because every task was just doing a database call, but they had a whole repo and AWS Lambdas for running it. Stupidest thing I’ve ever seen.
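Roughly the shape of the library, heavily simplified and with made-up names:

```typescript
// Each recurring task is just a self-rescheduling setTimeout around a DB call.
type Task = () => Promise<void>;

function schedule(name: string, task: Task, intervalMs: number): () => void {
  let timer: ReturnType<typeof setTimeout>;

  const run = async () => {
    try {
      await task(); // typically a single database call
    } catch (err) {
      console.error(`task ${name} failed`, err);
    }
    timer = setTimeout(run, intervalMs); // reschedule only after completion
  };

  timer = setTimeout(run, intervalMs); // first run
  return () => clearTimeout(timer); // cancel handle
}

// usage (db.deleteExpiredSessions is a made-up example call):
// schedule("expire-sessions", () => db.deleteExpiredSessions(), 60_000);
```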
> I deleted an entire microservice of task runners and replaced it with a library that uses setTimeout as the primitive driving tasks from our main server.
Your example raises some serious red flags. Did it ever dawn on you that the reason these background tasks were offloaded to a dedicated service might have been to shed this load from your main server and protect it from sudden peaks in demand?
These background tasks are all database calls. That means the CPU is just waiting on the database for the majority of each call. Most modern servers can handle 10k of these calls concurrently, and you can do this on one not-so-powerful CPU. Even half a CPU can handle this. Of course it depends on the CPU, but you get my point.
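Toy illustration of the point, with the database call simulated by a timer:

```typescript
// 10k concurrent "database calls" on one thread: the fake query just waits on
// I/O-style latency, so the CPU is idle the whole time.
const fakeDbCall = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

async function main() {
  const started = Date.now();
  // Fire 10,000 concurrent 100ms "queries"; wall time stays around 100ms,
  // because the event loop is only waiting, not computing.
  await Promise.all(Array.from({ length: 10_000 }, () => fakeDbCall(100)));
  console.log(`done in ${Date.now() - started}ms`);
}

main();
```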
The database is the bottleneck. The database is the thing that needs to be scaled first, before you scale servers. This is the most common web application pattern. One way is providing more compute to the database (sharding is better than increasing CPU power, as the bottleneck in the database is usually filesystem access, not CPU power). Another way is to have a queue buffer the traffic spikes. Both of these address an issue with the database first.
In most web apps, all the server does is wait for the database. The database is doing the compute. You never want the server to do compute, as that becomes what we call a “blocking call.” These blocking calls are the ones you offload to an external service, because they “block” entire CPU threads. Database calls do not “block,” as the server will context-switch to another green thread during a database call.
If you work somewhere that scales CRUD servers without first scaling the central database, it usually means you’re in a company that doesn’t get it and overemphasizes “architecture” over common sense. It’s actually extremely common in lower-tier, small companies to have not-so-smart people build things like this that don’t make any sense. They aren’t thinking coherently, and I’ve seen tons of people who just miss this common-sense notion.
I’ll be frank: it’s stupid and defies common sense. It’s likely that this is what you were doing? But it’s also extremely commonplace.
If you flatten both of your trees/graphs and regard the output as strings of nodes, you reduce your task to a substring search.
Now if you want to verify that the structures, and not just the leaf nodes, are identical, you might be able to encode structure information into your strings.
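Something like this, hand-wavy, assuming node labels never contain the delimiter characters:

```typescript
// Serialize each subtree with explicit structure markers, so subtree
// containment becomes substring containment over the serializations.
interface TreeNode {
  label: string;
  children: TreeNode[];
}

function serialize(node: TreeNode): string {
  return `(${node.label},${node.children.map(serialize).join("")})`;
}

// Is `small` (a tree) an exact subtree of `big`? With the markers above, the
// question reduces to a substring search.
function isSubtree(big: TreeNode, small: TreeNode): boolean {
  return serialize(big).includes(serialize(small));
}
```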
I was thinking in terms of finding all subgraph isomorphisms. But this definitely is O(N) if all you need is one solution.
But then I thought about it even further, and this reduces to a sliding window problem. In this case you still need to visit each node in the window to see if there’s a match.
So it cannot be that you traverse each node once. Not if you want to find all possible subgraph isomorphisms.
Imagine a string that is a fractal of substrings:
rrrrrrrrrrrrrrrrrrrrrrrrrrrr
And the other one:
rrrrrrr
Right? The sliding window for rrrrrrr will be 7 in length and you need to traverse that entire window every time you move it. So by that fact alone every node is traversed at least 7 times.
Hi I'm a mathematician with a background in graph theory and algorithms. I'm trying to find a job outside academia. Can you elaborate on the kind of work you were doing? Sounds like I could fruitfully apply my skills to something like that. Cheers!
Look into quantitative analyst roles at finance firms if you’re that smart.
There’s also a role called algorithms engineer at standard tech companies (typically for lower-level work like networking, embedded systems, or graphics), but the lack of an engineering background may hamstring you there. Engineers working in crypto also use a fair bit of algorithms knowledge.
I do low level work at a top company, and you only use algorithms knowledge on the job a couple of times a year at best.
You can try to get a job at an investment bank, if you're okay with heavy slogging in terms of hours, which I have heard is the case, although that could be wrong.
I heard from someone who was in that field that the main qualification for such a job is analytical ability and mathematics knowledge, apart from programming skills, of course.
That was about 20 years ago. Not much translates to today's world. I was in the algorithms team working on a CMDB product. Great tech, terrible marketing.
These days it's very different, mostly large-ish distributed systems.
I would love a little more context on this, cause it sounds super interesting and I also have zero clue what you’re talking about. But translating a stateful program into a stateless one sounds like absolute magic that I would love to know about
He has two graphs. He wants to determine whether one graph is a subgraph of another.
The graph to be matched as the subgraph is a tree. From there he says it can be done with an algorithm that traverses every node at most once.
I’m assuming he’s also given a starting node in the original graph, and the algorithm just traverses both graphs at the same time, starting from the given start node in the original graph and the root of the tree, to see if they match? Standard DFS or BFS works here.
I may be mistaken, because I don’t see any other way to do it in one walkthrough unless you are given a starting node in the original graph.
To your other point, the algorithm inherently has to be stateful as well. All graph traversal algorithms have to have long-term state, simply because if you’re at a node in a graph and it has, say, 40 paths to other places, you can literally only go down one path at a time, and you have to statefully remember that the node has another 39 paths you have to come back to later.
I oversimplified the problem :). Really it was about generating an isomorphic-ish view, based on some user-defined rules, of an existing graph, itself generated by a subgraph isomorphism via a query language.
Think of a computer network as a graph, with various other configuration items like processes, attached drives, etc. (something also known as a CMDB). Query that graph to generate a subgraph out of it. Then use rules to make that subgraph appear as a tree of layers (a tree, but in each layer you may have additional edges between the vertices), because trees are an efficient, non-complex representation in 2D space (i.e. monitors).
However, a child node in that tree isn't necessarily connected directly to the parent node. E.g. one of the rules may be "display the sub network and the attached drives in a single layer", so now the parent node, the gateway, has both network nodes (directly connected to it) and attached drives (indirectly connected to it) as direct descendants.
Extend this to be able to connect through any descendant, direct or indirect (gateway -> network node -> disk -> config file -> config value - but put the config value on the level of the network node and build a link between them to represent the compound relationship).
Walk through the original subgraph while evaluating the rules, and build a "trace back" stack that lets you understand how to build each layer, even in the presence of compound links, while performing a single walkthrough instead of n×m (original vertices × rules for generation).
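If I were to sketch it from memory today (names and details invented, not the original code), the shape was roughly:

```typescript
// One DFS over the queried subgraph; a "trace back" stack of (node, rule) steps
// lets each rule peek at any ancestor, so compound links like
// gateway -> ... -> config value can be emitted in the same single pass.
type NodeId = string;

interface LayerRule {
  // Does this graph node belong to the layer this rule builds? The rule may
  // peek at the whole path walked so far, not just the immediate parent.
  accepts: (node: NodeId, trace: { node: NodeId; rule: LayerRule }[]) => boolean;
  layer: number;
}

interface LayeredEdge { from: NodeId; to: NodeId; layer: number }

function buildLayeredView(
  adjacency: Map<NodeId, NodeId[]>,
  root: NodeId,
  rules: LayerRule[],
): LayeredEdge[] {
  const out: LayeredEdge[] = [];
  const trace: { node: NodeId; rule: LayerRule }[] = [];
  const visited = new Set<NodeId>();

  const walk = (node: NodeId, rule: LayerRule) => {
    if (visited.has(node)) return;
    visited.add(node);
    trace.push({ node, rule });
    for (const next of adjacency.get(node) ?? []) {
      for (const r of rules) {
        if (r.accepts(next, trace)) {
          // Anchor the edge on whichever ancestor owns the layer above r's,
          // which may be far up the trace (the "compound link" case).
          const anchor =
            [...trace].reverse().find((t) => t.rule.layer === r.layer - 1) ??
            trace[0];
          out.push({ from: anchor.node, to: next, layer: r.layer });
          walk(next, r);
          break;
        }
      }
    }
    trace.pop();
  };

  walk(root, { accepts: () => true, layer: 0 });
  return out;
}
```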
As I said, that was a lot of fun. I miss those days.
The target being a tree is irrelevant, right? It’s the “guided” part that makes a single walk-through possible?
You are starting at a specific node in the graph and saying that if there’s an isomorphism the target tree root node must be equivalent to that specific starting node in the original graph.
You just walk through the original graph following the pattern of the target tree; if something doesn’t match it’s false, otherwise true? Am I mistaken here? Again, the target being a tree is a bit irrelevant. This will work for any subgraph as long as you are also given starting nodes for both the target and the original graph?
I advise checking out the user's other comments before jumping to conclusions. Doesn't look AI-generated to me, rather just an "individual" writing style. Just because it's possible doesn't mean it's true. Maybe the user can confirm?
Otherwise just downvote or flag I guess, but this comment of yours just reads as an insult to a person who maybe did not put the most effort into writing their comment, but seems genuine to me at least.
I've worked on a product that reinvented parts of the standard library in confusing and unexpected ways, meaning a lot of the code could easily be compacted 10-50 times in many places, i.e. 20-50 lines could be turned into 1-5 or so. I argued for doing this and deleting a lot of the code base, which didn't take hold before I and every other dev except one left. Nine months after that they had deleted half the code base out of necessity, roughly 2 MLOC down to 1 MLOC, because most of it wasn't actually used much by the customers and the lone developer just couldn't manage the mess on his own.