While the multiplier is less for me ( perhaps 3x or 4x ), I think the assumption that a productivity gain leads directly to more money is a bit optimistic. Unless you are self-employed, or run your own company, and actually get paid by results, being more efficient is seldom paid much ( with luck you get a promotion every two years or so, or a pay raise every year ).
I have worked for too long in the field, but this year, simply thanks to the LLMs, I have actually managed to get 4 usable hobby projects done ( as far as I need them to go anyway - personal tools that I use and publish but do not actively maintain unless I need some new feature ), and I have been quite productive with a stack I do not normally use at our startup. In most previous years I finished 0-1 hobby projects.
The ramp-up period for the new stack was much shorter, and while I still write some code myself, most of it at least starts as LLM output which I review and adjust to what I really want. It is a bit less intellectually satisfying, but a lot more efficient way for me to work. And ultimately, for work at least, I care more about good-enough results.
Completely unrelated, but what’s with the spaces inside the parentheses here? It’s super weird (and leads to incorrect text layout, with a standalone parenthesis at the end or beginning of a line…)
I at least write my blog mostly for its own sake. I do not even bother checking statistics about page views or such, there are no ads, and I am doing it simply because it makes ( the public part of ) my journal sometimes have a bit more thoroughly written-down notes about things I am doing anyway.
Moral of the story: always have an SHA ( shareholders' agreement ), and preferably with reverse vesting ( with some cliff ), so that if you leave early you still get something, but if you leave almost immediately, you get nothing.
Having said that, I would try to improve the numbers - e.g. 5% of the company as your remaining share ( that probably still looks fine from the investors' PoV ), or more money, depending on which you want.
Thanks for the feedback and pointing out ostiary. Fixing replay attacks is on my todo list, maybe I can learn some things from how ostiary does it.
Kind advice from my PoV:
Your comment could be read as "your project is shit, there is ostiary which has replay protection and yours doesn't".
I'm sure you didn't intend for your comment to come across that way, and I also did not read it that way, but others could have.
Also keep in mind that ruroco is a very young project and is by no means finished. I was thinking about using one-time-pads or other encryption algorithms as well. I also posted this here to get feedback to improve my project.
So hopefully when I release version 1.0.0, all the issues that this project has at this point are resolved ;)
Ostiary prevents replay by salting. The client's reply is only valid for the unique salt that the server has generated, only for a short time, and obviously only once.
A replay attack can only make the server do whatever the legit client intended it to do, just up to [timeout] seconds later.
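To make that concrete, here is a minimal Rust sketch of that style of challenge-response (my own illustration of the idea, not ostiary's actual code; the names are made up, and a real implementation would also bind the reply to the nonce with an HMAC or signature):

    use std::io::Read;
    use std::time::{Duration, Instant};

    /// One outstanding challenge: a random nonce and when it was issued.
    struct Challenge {
        nonce: [u8; 16],
        issued: Instant,
    }

    struct Server {
        outstanding: Vec<Challenge>,
        timeout: Duration,
    }

    /// 16 random bytes from /dev/urandom (Unix-only, keeps the sketch std-only).
    fn rand_nonce() -> [u8; 16] {
        let mut buf = [0u8; 16];
        std::fs::File::open("/dev/urandom")
            .and_then(|mut f| f.read_exact(&mut buf))
            .expect("no /dev/urandom");
        buf
    }

    impl Server {
        /// Hand the client a fresh nonce to use in its reply.
        fn issue_challenge(&mut self) -> [u8; 16] {
            let nonce = rand_nonce();
            self.outstanding.push(Challenge { nonce, issued: Instant::now() });
            nonce
        }

        /// Accept a reply only for a live, unexpired nonce, and consume it
        /// either way, so the same reply can never be accepted twice.
        fn verify(&mut self, nonce: &[u8; 16]) -> bool {
            if let Some(pos) = self.outstanding.iter().position(|c| &c.nonce == nonce) {
                let c = self.outstanding.remove(pos);
                return c.issued.elapsed() < self.timeout;
            }
            false
        }
    }

    fn main() {
        let mut server = Server { outstanding: Vec::new(), timeout: Duration::from_secs(10) };
        let nonce = server.issue_challenge();
        assert!(server.verify(&nonce));  // first, timely reply: accepted
        assert!(!server.verify(&nonce)); // replay of the same nonce: denied
    }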
The deadline that is sent from the client is added to the blocklist after the command is executed, so sending the same packet again will not work: the deadline (which is in nanoseconds) is already on the blocklist, and therefore the command will not be executed again.
This effectively means that replaying a packet is not possible, because the server will deny it.
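Roughly, the server-side check works like this (a simplified sketch of the idea, not the actual ruroco code; the names are made up):

    use std::collections::HashSet;
    use std::time::{SystemTime, UNIX_EPOCH};

    /// Deadlines (ns since the epoch) of commands that were already executed.
    struct Blocklist {
        seen: HashSet<u128>,
    }

    impl Blocklist {
        fn now_ns() -> u128 {
            SystemTime::now()
                .duration_since(UNIX_EPOCH)
                .expect("clock before epoch")
                .as_nanos()
        }

        /// Returns true at most once per deadline, and only while the deadline
        /// is still in the future; a replayed packet carries a deadline that is
        /// either expired or already in the set, so it is denied.
        fn accept(&mut self, deadline_ns: u128) -> bool {
            let now = Self::now_ns();
            // Expired deadlines can never be accepted again anyway, so drop
            // them to keep the blocklist from growing forever.
            self.seen.retain(|&d| d > now);
            if deadline_ns <= now {
                return false; // packet is past its deadline
            }
            self.seen.insert(deadline_ns) // false if seen before, i.e. a replay
        }
    }

    fn main() {
        let mut bl = Blocklist { seen: HashSet::new() };
        let deadline = Blocklist::now_ns() + 5_000_000_000; // 5 s from now
        assert!(bl.accept(deadline));  // original packet: executed
        assert!(!bl.accept(deadline)); // replayed packet: denied
    }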
By freelancing you can save a nice nest egg in most places. I did that for 11 years in Europe, and now I work because I want to, not because I must. Disclaimer: I'm not consulting anymore; I moved back to the startup grind once more because I feel more connected to the work than I did as a consultant.
I'd rather just monitor the memory usage and react ( = get alerts ) if it goes too high. The performance impact of actual swapping is frequently more visible than an occasional oomkill.
Nowadays it is the inverse: is there ever a good reason to enable it? Most devices run on TLC or other fast-wearing flash, and swapping there is both expensive in terms of durability loss and still much slower than just having enough RAM.
I think my only device with swap is my Mac laptop, and it is relatively conservative about when it swaps, unlike Linux with default settings.
Yes. It's a rare system on which it shouldn't be enabled.
RAM is a precious resource, and it's highly likely programs will allocate RAM that won't be used for days at a time. For example, if you are using Docker, once the containers are started the management daemon does nothing. If you have ssh enabled only for maintenance, it is unlikely to be used for days if not weeks on end. If your system starts gettys, they are unlikely to be used, ever. The bulk of the init system probably isn't used once the system is started.
All those things belong in swap. The RAM they occupied can be used as disk cache. Extra disk cache will speed up things you are doing now. Notice this means most of the posts here are wrong - just having swap actually gives you a speed boost.
One argument here is that disabling swap gives you an early warning that you need more RAM. That's true, but there is a saner option. Swap is only a problem if the memory stored there is being used frequently. How do you monitor that? You look at reads from swap. If the steady state of the system shows no reads from swap, you aren't using what's there, so it can't possibly have a negative speed impact. But if swap is being used and isn't being read from, it has freed some RAM, so it is having a positive speed impact.
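On Linux, the numbers to watch are the pswpin/pswpout counters in /proc/vmstat (vmstat 1 shows the same thing per second in its si/so columns). A toy watcher, just to illustrate what "reads from swap" means (my own sketch, not a standard tool):

    use std::{fs, thread, time::Duration};

    /// Read a counter from /proc/vmstat; pswpin counts pages read back in
    /// from swap since boot, which is exactly the "reads from swap" signal.
    fn vmstat_counter(name: &str) -> u64 {
        let prefix = format!("{name} ");
        fs::read_to_string("/proc/vmstat")
            .expect("no /proc/vmstat (Linux only)")
            .lines()
            .find_map(|l| l.strip_prefix(prefix.as_str()))
            .and_then(|v| v.trim().parse().ok())
            .expect("counter not found")
    }

    fn main() {
        let mut prev = vmstat_counter("pswpin");
        loop {
            thread::sleep(Duration::from_secs(10));
            let cur = vmstat_counter("pswpin");
            // If this stays at 0, whatever sits in swap is idle: swap is only
            // freeing RAM for disk cache, not slowing anything down.
            println!("pages read from swap in last 10s: {}", cur - prev);
            prev = cur;
        }
    }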
One final observation: the rule of thumb "swap should be twice the size of RAM" isn't so helpful nowadays. There isn't a lot of program memory that sits around doing nothing - maybe 1GB or so, and it's more or less fixed regardless of what the system is doing. Old systems didn't have 2GB of RAM, so the "twice RAM" rule made sense. But now a laptop might have 16GB, and using 32GB of swap without reading from it would be a very, very unusual setup. If you are reading from that 32GB of swap, your system is RAM starved and will be dog slow. You need to add RAM until those reads drop to 0.
The best modern reason to have as much swap as RAM is to make hibernation to disk more reliable, but a lot of people don't use that anymore. It's more reliable because the kernel doesn't have to work as hard to find space to write the system image to.
> One final observation: the metric "swap should be twice the size of RAM" isn't so helpful nowadays
I remember when I thought "if double is recommended, four times should be even better!". It was not.
Nowadays I don't use swap because I rarely run out of RAM; it would just sit there eating a few precious GB of SSD, largely unused. The rare cases where I have run out of RAM have been buggy Steam games on Proton. In 2024 it has been only "The Invincible", and that game has reports of running out of memory on Windows too.
It depends a lot on the startup. I have a similar number of startup experiences, and only one had early-stage secondary sales ( but those were available even to non-founders ). Mainly the money comes from an IPO or other exit.