I can really only speak to my personal experience versus the X hype, but enough influential, experienced devs who aren't part of the LLM industry are reporting great success that I have to think that much smoke must mean fire.
Some background: I'm a "working manager" in that I have some IC responsibilities as well as my management duties, and I'm pretty good at written communication of requirements and expectations. I've also spent a number of years reading more code than I write, and have a pretty high tolerance for code review at this point. Finally, I'm comfortable with the shift from my value being what I create to what I help others create.
TLDR: Agentic coding is working very well for me, and allows me to automate things I would have never spent the time on before, and to build things that the team doesn't really have time to build.
Personally, I started testing the waters seriously with agentic coding last June, and it took probably 1-2 months of explicitly only using it with the goal of figuring out how to use it well. Over that time, I went from a high success rate on simple tasks but a mid-to-low success rate on complex tasks, to a generally high success rate overall. That said, my process evolved a LOT. I went from simple prompts that lacked context, to large prompts with a ton of context where I was trying to one-shot the results, to simple prompts plus a lot of questions and answers, used to build a prompt that builds a plan to execute on.
My current process is basically: state a goal or a current problem, and ask for questions to clarify the requirements and the goal. Working through those questions and answers often makes me examine my assumptions and tweak my overall goal. Eventually I have enough clarity to have the agent generate a prompt to build a plan.
Then I clear out context and feed in that prompt, having it ask additional questions if I have a strong feeling about the direction and what I would personally build. If there's still some uncertainty at that point, it usually means I don't understand the space well enough to get a good plan, so I have it build anyway with the intention of learning through building, then throw the result away once I have more clarity.
Once we have a plan, have the agent break it down into prioritized user stories with individual tasks, tests, and implementation details. Read through those user stories to get a good idea of how I think I would build it so I have a good mental model for my expectations.
Clear out context and have the agent read in the user stories and start implementing. Early on in the implementation, I'll read 100% of the code generated to understand the foundation it's building. I'll often learn a few things, tweak the user stories and implementation plans, delete the generated code and try again. Once I have a solid foundation, I stop reading all the code, and start skimming the boilerplate code and focus only on the business rules / high complexity code.
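The phases above could be sketched roughly like this. To be clear, this is a hypothetical illustration, not my actual tooling: `agent` stands in for whatever coding agent you use, and the prompt wording is made up for the example.

```python
from typing import Callable

# Hypothetical sketch of the plan-first workflow. Each call to `agent`
# is assumed to start from a fresh context window.
Agent = Callable[[str], str]

def clarify_goal(agent: Agent, goal: str, answer: Callable[[str], str]) -> str:
    """Q&A on the goal, then have the agent write a plan-building prompt."""
    questions = agent(f"Goal: {goal}\nAsk clarifying questions before planning.")
    qa = answer(questions)  # answered by a human
    return agent(f"Goal: {goal}\nQ&A:\n{qa}\n"
                 "Write a prompt I can use to generate an implementation plan.")

def plan_to_stories(agent: Agent, plan_prompt: str) -> str:
    """Fresh context: build the plan, then break it into user stories."""
    plan = agent(plan_prompt)
    return agent(f"Plan:\n{plan}\nBreak this into prioritized user stories "
                 "with individual tasks, tests, and implementation details.")

def implement(agent: Agent, stories: str) -> str:
    """Fresh context: implement from the stories, one at a time."""
    return agent(f"User stories:\n{stories}\nImplement them story by story.")
```

In practice, each function boundary here corresponds to clearing the agent's context and pasting in the artifact produced by the previous phase.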
I focus heavily on strong barriers between modules, and keeping things as stupidly simple as I can get away with. This helps the models produce good results because it requires less context.
Different models prompt differently. While the Opus/Sonnet family of models drive me nuts with their "personality", I'm generally better at getting good results out of them. The GPT series of models, I like the personality more, but kinda suck at getting good results out of them at this point. It takes some time to develop good intuition about how to prompt different models well. Some require more steering as to which files/directories to look in, others are great at discovering context on their own.
If the agent is going down a wrong path, it's usually better to clear context and reset than to try and steer your way out of screwed up context.
Get comfortable throwing away code, you'll get better results if you don't think of the generated code as precious.
The current job market will probably suck for the next 2-4 years, honestly. Over time, people will leave the industry and find other careers, and the market won't be as heavily impacted as it is now.
If possible, go to local meetups for whatever type of role you are in or interested in. The current environment, while very different from the 2000s dot-com bust, has certain similarities, and at that time the only way to really find work was through relationships. I know that back then I ended up switching from being a software engineer to desktop support for about 6 months just to stay employed, especially since it was the only job available in my friend group.
This will be a U.S. centered response, because that’s where I live and work. We’ve tried hiring for local and remote roles. It’s a terrible experience all around, both on the hiring and being hired side of the equation.
The company I work for is a medium-sized business in residential and commercial construction. For example, a recent React Native mobile dev position my company posted got about 300 applications in the first hour, and about 500 total in the first week on Indeed. Of those applications, 90% didn't meet most of the requirements for the position. The job description says that we don't sponsor H-1B visas (because it's stupidly expensive now). Of the 10% that somewhat met the minimum qualifications, all but 1 required sponsorship. And though this was listed as a hybrid role, only 20 people applied from the region where the office is.
We already know from previous roles that a huge percentage of people whose resumes say they have the required skills won't come close to making it through the interview process.
As a company we like AI/ML tools, encourage our staff to learn them and use them where appropriate, and want to invest in everyone's skills with new tools. That said, we try not to use AI where a human connection is important (hiring, sales, etc.). We've had to resort to AI for dealing with the massive influx of low-quality job applications, and it sucks.
Basically anyone who goes above and beyond at this point automatically gets at least an interview.
I do understand why so many people are just applying to every job that shows up; it makes sense. But it really does make finding those few great people very difficult.
We aren’t a ruby/rails shop otherwise I’d reach out to OP.
I'm curious where your company is located. I am a native mobile developer, but have experience with Flutter and React Native applications. I don't require any sponsorship and am willing to relocate for the right role. If your company is still looking please reply here or my email ggenova79@gmail.com.
They are on the record about why they switched to a Chromium-based browser. It's been a while, but if I'm remembering correctly, at the time Google was making changes to YouTube to make it actively slower and use more power on Edge. Microsoft realized that while they could compete as a browser, they couldn't compete while also fighting Google doing underhanded things to sabotage their browser.
It wasn’t slack, but I’ve had multiple vendors that I was in regular touch with, surprise me with pricing changes in the week(s) leading up to a contract renewal. Never quite this short notice, but definitely as little as 8 business days before the renewal was due.
Both times I’ve paid the new price for 1 year and cancelled. Both times our sales rep was surprised the next year when we didn’t renew.
In this case, it looks like Hack Club sat on a gargantuan bill for at least weeks and maybe months (see top comment on this post).
I'm not denying that what you describe happens, but in this case - ignoring the warning signs, letting the issue crash into a wall and then complaining online about it doesn't help anyone.
I get that regardless there were warning signs, but it honestly seems like Slack either miscommunicated or flat-out lied to them about the ability to address pricing. While in retrospect they should have started preparing to migrate away, it's human nature to assume good intentions and hope that things will work out.
There are a couple of interpretations here.
1. The sales rep really thought they would be able to retain good pricing and it fell through, and at the last minute Hack Club was blindsided by the inability to retain it.
2. The sales rep thought that Hack Club was likely to jump ship if they had time to plan around the new pricing, and lied to them about the possibility of retaining it, figuring that by doing so they could force at least one year at the higher cost.
3. Hack Club is misrepresenting their communications with Slack to drum up public approval.
My guess is that option 1 is the most likely: the optimism of the sales rep ended up being a net negative, and, human nature being what it is, Hack Club thought things would work out. Everyone is already busy, so why borrow trouble?
As for complaining online: sadly, bad press seems to be the only lever most people have as a forcing factor for companies these days. For a long time I honestly only kept a Twitter account so I could complain about companies in public to get them to do the right thing, so unfortunately complaining online does actually help.
Some topics I end up needing to know a lot about despite a lack of interest (looking at you, UEFI), so I learn until I can solve all the problems I'm having. With others, I quickly surpass my needs and then continue out of interest for a while (networking, routing, etc.).
This assumes facts not in evidence. While the posted quote is sanitized, the assumption that the poster did the sanitization vs. copying from a sanitized source isn't necessarily supported.
Fair enough. But no need for the faux-legalese, it isn't clear whether the OP sanitised it or copied it that way. That changes nothing about my comment though, just who sanitised it.
I mean, there’s a chance it’s exactly what he said: “I didn't give it much thought at the time, but knew that I wanted the code to be available for people to learn from, and to make it easily auditable so users could validate claims I have made about the privacy and security of the platform.” It doesn’t have to be some sort of nefarious OSS altruism. It really could be “maybe people would want to see how this works”… that ends up leading to… oh crap, a bunch of people who have never contributed, and will never contribute, are hosting versions of what I created and taking money that I really would like to have to feed my family.
To be unfairly cynical here, the sentence you quoted sounds to me like "I chose to not have a front door. I didn't give it much thought at the time, but knew that I wanted my home to be available for people to learn from my interior design choices and decorations. Then I discovered that people walked in, started to eat out of my fridge, leave dirt everywhere and carry off some of my chairs, and it hurts".
The fault here lies not with the persons who use the maintainer's code exactly in line with the license, no matter what other _intentions_ he might have had.
Possibly, but that would be pretty damning. A license isn't something you should YOLO. If he is that laissez-faire about licensing the source code, then what other important aspects of the project has he not given sufficient thought?
Misunderstanding or failing to predict the legal ramifications of choosing an extremely popular license is in no way an indicator of programming care or ability. They’re different sets of skills.
Also, something starts off as a nothingburger side project, so you make some decisions based on that. Then it develops a bit, and turns into something you care about and are able to turn into a business. What people want and expect changes over time, and a license on a codebase that is basically developed by one person, isn't a marriage.