
I worked with ASN.1 for a few years in the embedded space because it's used for communications between aircraft and air traffic control in Europe [1]. I enjoyed it. BER encoding is pretty much the tightest way to represent messages on the wire, and when you're charged per-bit for messaging, it all adds up. When a messaging syntax is defined in ASN.1 in an international standard (ICAO 9880, anyone?), it's going to be around for a while. I haven't been able to get my current company to adopt ASN.1 to replace our existing homegrown serialization format.

[1] https://en.wikipedia.org/wiki/Aeronautical_Telecommunication...


Isn't PER or OER more compact? Especially for the per-bit charging thing.


Oh yeah, derp. I was thinking unaligned-PER, not BER.


Of all the encodings, I like BER the most as well.

(I worked in telecommunications when ASN.1 was a common thing.)


> This friend told me she can't work without ChatGPT anymore.

It doesn't say she chooses to use it; it says she can't work without using it. At my workplace, senior leadership has mandated that software engineers use our internal AI chat tooling daily; they monitor the usage statistics and are updating engineering leveling guides to make sufficient AI usage a requirement for promotions. So I can't work without AI anymore, but it doesn't mean I choose to.


> those who want to parse a JSON document with a YAML parser.

I've done it. We already had a YAML parser in an internal library I maintain, since we were already ingesting YAML files for other reasons. So when we later added new files that someone decided should be in JSON instead, it was easier and cleaner to keep using the existing YAML parser rather than add a separate JSON parser alongside it.


It has not so far.

"Experts say that airdrops, another measure Israel announced, are insufficient for the immense need in Gaza and dangerous to people on the ground."[1]

"[T]he airdrops have an advantage over trucks because planes can move aid to a particular location very quickly. But in terms of volume, the airdrops will be 'a supplement to, not a replacement for moving things in by ground.'"[2]

The airdrops killed people when 1) containers landed on occupied tents, and 2) containers landed in the water and people drowned attempting to retrieve the aid. Trucks can also deliver vastly larger quantities of aid substantially faster and cheaper than planes.

[1] https://apnews.com/article/gaza-starvation-israel-palestinia...

[2] https://apnews.com/article/israel-hamas-gaza-airdrop-humanit...


My department at the place I work is actively hiring Software Engineers. We have nine open requisitions for any seniority level and are regularly conducting interviews, but the new-grad candidates this year have been... disappointing.

I've conducted two phone screens this month and asked each candidate to implement FizzBuzz in their language of choice after giving them an explanation of the problem. Both took more than ten minutes to write out a solution and we don't even require them to run it; I'll excuse trivial syntax errors in an interview setting if I can tell what you meant.

When CS students can't write a basic for loop and use the modulo operator without relying on AI, I weep for their generation.


I also tutor students in the entry-level C++ and Python courses (which are taken during your first two semesters as a CS student), and I must agree that a large cohort of my class is only able to program if they have ChatGPT/Claude open on one half of their screen. I'm not sure how to solve this either, unless we want to start doing in person "interview" styled questions as an exam on a locked down computer.

I honestly think that doing an in-person mock technical interview, with a few easy Leetcode questions, at the end of your education would be a good way to weed out those who have failed to even learn the basics of the trade.


I'm so old I remember when calculators started appearing in ordinary people's hands. Schools first banned them (what da ya mean you can't add a column of numbers by eye?), but gradually we switched over. We had a brief flirtation with log tables, and never did get to use a slide rule. I've no doubt old-school businesses were aghast at our ineptitude.

I'm so old we learned to program with giant C reference books. There was no internet, much less Google. We didn't have no fancy auto-complete, crumbs a text editor was considered advanced. Them youngsters coming to us couldn't program without Googling syntax, or using an IDE.

So yeah, sure, AI is changing the game. It's hard to evaluate students because the tools they are using are different from the ones we knew. For decades we've used "make them code" as a measure of ability. In three years (their college experience) the toolset has changed.

Good students, good employees, are those who understand the problem and can adapt to a solution. AI is a tool that can be wielded well, or badly. Our approach to hiring will need to adapt as well. But good people are still out there, and good people make good workers.

To be honest I never was much in love with the leet code measure of hiring. Past a certain coding skill level I was more interested in the person than their ability to memorize an algorithm. Today that necessary skill level is lower, or at least harder to evaluate, but the problem-solving-mind is still the thing we're looking for.

So be careful of seeing the use of new tools as a weakness. The history of the world is littered with obsolete technologies. (Here's a sextant; where are we?) Rather, see people who use tools for what they are: tools. Look for people who are curious, who see patterns, who get things done.

And to students I say, mastery of tools is a necessary step, but ultimately an uninteresting one. See beyond them. Be curious. Look under the hood. Ask questions like "is this code good enough to be running 30 years from now?" Because a huge amount of what you see now has foundations in code written a long time ago, and written well enough to stand for decades.

College is not "learning to program". College is learning how to adapt to an ever changing world, that will require your adapting many times over your career.


> College is not "learning to program". College is learning how to adapt to an ever changing world, that will require your adapting many times over your career.

You're gonna have to do a lot of work to convince me that people who only know how to drive an LLM are learning how to adapt to sweet fuck all

At least with a calculator, people still had to know the difference between addition and multiplication, in order to use the calculator correctly


> You're gonna have to do a lot of work to convince me that people who only know how to drive an LLM are learning how to adapt to sweet fuck all

Driving an LLM properly requires knowing to evaluate if the results are correct. People can certainly try to pass generated code over for PR. But even just one code feedback or debugging should uncover if the person understood what they were doing.


What if driving an LLM well is actually a desirable skill?

What if changing from a "write code" based idea of programming changes to a "remove technical debt from code" skill?

What if the next generation of programmers is not focused on the creation of new code, but rather the improvement of existing code?

What if the current crop of programmers has to literally adapt from a world that has valued code quantity to a world that values code quality (something we don't especially prioritize at the moment)?

I'd argue that we're asking the current generation to be massively adaptable in terms of what was expected of us 10 (or 30) years ago, as to what will be required of them 5 years from now.

And to be clear, I'm not suggesting that LLMs will teach them to be adaptable. I'm suggesting that a world that contains LLMs will require them to be adaptable.


> What if changing from a "write code" based idea of programming changes to a "remove technical debt from code" skill

I don't believe you can do this if you can't write code, but sure. Maybe

> What if the current crop of programmers has to literally adapt from a world that has valued code quantity to a world that values code quality

LLMs seem more likely to increase the value of quantity and decrease the value of quality. That's playing out in front of us right now, with people "vibecoding"

> I'm suggesting that a world that contains LLMs will require them to be adaptable.

And ones who can't adapt will be ground to mulch to fuel the LLM power plants no doubt


Your response just triggered a deja-vu from back when scaffolding tools were the new hot thing, now everyone and their dog was able to spin up that todo application within one CLI command. Except the generated code was mostly boilerplate that had to be heavily adapted for any real life use case, unveiling all the ignorance that could be covered up to that point. It's the same with vibe code. Looks fun until you throw it into reality - and then you're on your own and better know how to deal with stuff.


I don't think you can compare a calculator to an LLM.

A calculator will always give you a correct result as long as you give it correct input. This is not the case with an LLM. No matter how good your prompt is, there's always a chance the output is complete garbage.


One big problem from the hiring side is the time to evaluate someone once complex tools are involved.


Did you ever consider the idea that AI is not the same as a calculator? Or consider the fact that there is no reason why there couldn't be another quantum leap next year? And another one after that?


I teach computer science at a community college in Silicon Valley. Even before generative AI became available to the general public, cheating has been an issue with CS programming assignments.

One way I try to disincentivize cheating on projects is by having in-class paper exams, including weekly quizzes, as well as in-class paper assignments, and making sure that these in-class assessments are weighted significantly (roughly 60% of the overall grade). No electronic devices are allowed for these assignments. This forces my students to be able to write code without being able to look up things online or consult an AI tool.

I still assign take-home programming projects that take 1-2 weeks to complete; students submit compilable source code. Practical hands-on programming experience is still vital, and even though cheating is possible, the vast majority of my students want to learn and are honest.

Still, for in-person assessments, if I had the budget, I'd prefer to hand out laptops with no Internet connection and a spartan selection of software: just a text editor and the relevant compiler/interpreter. It would make grading in-class submissions easier. But since we don't have this budget, in-class exams and exercises are the next best solution I could think of.


This reply will likely sound disrespectful, but I post it not to be so, but rather to perhaps spark an alternate path.

As the world changes, education can be slowest to adapt. My father did his math on a slide rule. I was in high school as we transitioned to using calculators.

My personal take on your approach is that you're seeing this from the wrong side. Creating an artificial environment for testing suggests to me you're testing the wrong thing.

Of course most school and college classes devolve into testing memory. "Here's the stuff to learn, remember it enough to pass the exam." And I get it, this is the way it's always been, regardless of the uselessness of the information. Who can remember when Charles I was beheaded? Who can't Google it in an instant?

Programming on paper without online reference tools isn't a measure of anything, because in the real world those tools exist.

Indeed, the very notion that we should even be testing "ability to write code" is outdated. That the student can create code should be a given.

Rather, an exam should test understanding, not facts. Here are two blocks of code: which is better, and why? Here's some code: what are the things about it that concern you?

Instead of treating the use of AI (or Google, or online help, or that giant C reference book I had) as "cheating", perhaps teach and assess in a world where AI exists.

I truly do get it. Testing comprehension is hard. Testing understanding is hard. Testing to sift wheat from chaff is hard. But, and I know I'm being harsh here, testing memory as a proxy for intelligence or testing hand-coded output as a proxy for understanding code is borderline meaningless.

Perhaps in the age of AI the focus switches from 'writing code' to 'reading code'. From the ability to write to the ability to prompt, review, evaluate and so on.

Perhaps the skill that needs to be taught (to the degree that community college seeks to teach skills) is programming with AI, not against it.

I say all this with respect for how hard your job is, and with my thanks that you do it at all. I also say it understanding that it's a huge burden on you that you didn't necessarily sign up for.


The problem is that tools like AI are useful if and only if you have the prerequisite knowledge, otherwise they are self-destructive.

It's similar to a calculator. We give students graphing calculators, but ONLY after they have already graphed by hand hundreds of times. Why? Because education does not work like other things.

Efficiency, in education, is bad. We don't want to solve problems as fast as possible, we want to form the best understanding of problems possible. When I, say, want to book an airplane ticket, I want to do that in the fastest way possible. The most efficient manner. I care not about how an airport works, or how flight numbers are decided, or how planes work.

But efficient education is bad education. We can skip 99% of education, if we wanted. We can have, say, the SAT - and spend 1 year studying only for the SAT. Don't bother with the other 12 years of schooling.

Will you get an acceptable score on the SAT this way? Maybe. Will you be intelligent? No, you will be functionally illiterate.

If we use AI for programming before we can program, then we will be bad programmers. Yes, we can pass a test. Yes, we can pass a quiz. But we don't know what we're doing, because education is cumulative. If we skip steps, we lose. If we cut corners, we lose. It's like trying to put a roof on a house when the foundation isn't even poured.


I wish I could have gone to these schools where testing is just memorization. Everything would have been so easy


> I'm not sure how to solve this either, unless we want to start doing in person "interview" styled questions as an exam on a locked down computer.

Don't lock down the computer unless you are hiring people to work in a SCIF. Instead, give candidates a brutally hard/weird problem and tell them to use any resources they can get their hands on, by fair means or foul. (They will do that anyway if you hire them.) Then watch how they deal with it.

Do they just give up and stalk off in a huff?

If they Google for answers, do they use sensible queries?

If they use AI, do their prompts show skill at getting ideas, avoiding blind alleys, and creating effective tests?

If they call their friends, see how effective they are at communicating the requirements and turning the answers into a solution. Might be management material.


I’ll second this, and we had enough resumes to only interview those with a relevant Master’s degree. I was shocked and I still don’t have a full explanation. I don’t doubt that it’s also hard out there, but on the hiring side we also did far more interviews than we wanted. (And yes the salary is >>100k, full remote, benefits etc)


> When CS students can't write a basic for loop and use the modulo operator without relying on AI, I weep for their generation.

I feel like this doesn't get said enough, but I'm almost certain your issue is happening during filtering, prior to even getting to the interview stage. Companies are straight up choosing (the wrong) applicants to interview, the applicant fails the interview, the company does not move forward with them, and then the company does not go back and consider the people it originally filtered out.

I know companies get swamped with tons of applications, and filtering is basically an impossible problem since anyone can make their resume look good, but surely not every applicant who applied can be that bad.

Bad applicant filtering at the first step is hurting both companies and applicants.


Two data points and you're drawing a conclusion about an entire graduating class? For all we know, you might be experiencing the reality that your company isn't able to attract great young talent.


FizzBuzz was always a great filter. Even in the pre-LLM days. Many people can code for years and never once use the modulo operator. Solving the problem gets a lot more clunky without it and they get rejected.


Yes, but it is also one of the most common programming questions at non-FAANG companies. Are grads not preparing for interviews? Jeff Atwood's blog is one Google search away.


When I was in school in the early 2010s, I was working in a professor's lab and overheard conversations indicating that the administration was telling profs/TAs to pass kids whom the profs/TAs thought should have failed. I've since seen the required coursework to graduate become less rigorous. There were students I graduated with, whom I worked with personally, who were very bad. I'm sure there are still great students who care about learning, but I can't imagine how bad the average student is now that ChatGPT can do students' assignments.


Are you offering enough pay that competent people would want to work there?


We're in the greater Seattle area and I make north of $200k, so I feel like yes :shrug:


My experience is that this problem significantly predates AI. Not that AI won't make it worse, but pre-2020 the majority of entry level developer applicants I interviewed could not write a basic for loop in their choice of language, never mind the modulo operator.


> the price seems way high.

How much is your child's finger worth?

I'm looking at getting a SawStop table saw so I can teach my child woodworking with slightly more peace of mind that if something goes wrong, they'll be less likely to lose one or more fingers. Kids get distracted, they forget the rules you've taught them, and accidents happen.

This is also a tool I'll consider purchasing to provide my child an introduction to the concepts before graduating to the bigger, louder, stronger wood saws.


Or skip the power tools to begin with?

I use a:

https://bridgecitytools.com/products/jmpv2-jointmaker-pro

and have worked with a number of kids to make small projects using it (and hand saws/drills/yankee screwdrivers/braces/planes)

Their Chopstick Master is a great introduction.


That's very interesting! Thanks for the recommendation, definitely something I'll consider.


There's an interesting planet money podcast[0] about SawStop and why it's not a bigger thing in the world. TLDR: the big power tool companies didn't want to pay to licence the tech, so evidently came to some mutual agreement to ignore it as a feature to save customer fingers.

[0]:https://www.npr.org/2024/10/11/nx-s1-5135668/planet-money-wh...


> I'm still a bit upset that they took away vertical taskbars from us

After getting a new W11 laptop at work and discovering the removal of vertical taskbars (ridiculous on 16:9 monitors), that particular loss is specifically what keeps me from "upgrading" from W10. Windows Updates warns me that "certain features aren't available on Windows 11" without specifying which ones; finding that sane placement of my taskbar is one of them means I'm plenty willing to pay $30 for another year of security updates to get to keep W10.


> most classes are not designed to be inherited but want to expose a clean public API.

> While true that all classes can be inherited

That's what the 'final' keyword in C++ is for: https://en.cppreference.com/w/cpp/language/final.html

"Specifies that a virtual function cannot be overridden in a derived class, or that a class cannot be derived from."


C# and Java both have `sealed` for that, too.


Nitpick: it is called "final" in Java, but your point stands. As I understand it, C++ borrowed its "final" keyword from Java/C#. I'm not trolling when I write that last sentence. That is something that has really changed about C++ since the C++11 committee was started, and continues: when people write language/library proposals, they now regularly survey the landscape and talk about their experiences with other languages/libraries.


Technically Java has both `final` and `sealed`, and the distinction between the two is complicated, but `sealed` is the more powerful/capable/adjustable of the two. But the rest of your point stands and it does seem useful that C++ is paying more attention to the large ecosystems of its descendant languages and other relatives.


Holy shit: I stand corrected about the sealed keyword in Java! It was added in Java 17. <Hat tip>


> I always thought it was weird that I had to apply with my transcripts and resume.

I similarly thought it weird when Garmin asked me for transcripts when I applied there a few years ago. It had been 15 years since I'd graduated, so I was lucky I still had a couple of copies of my official transcripts from back then. After spending the effort to find them and scan them in with my 3.8 GPA, I didn't even get a phone screen.


The grandparent said

> support wireless CarPlay and android auto

Removing LTE doesn't cost me real-time traffic updates because (preferred maps app) is running on my phone, which already has LTE. Streaming media? The media is being played from my phone or streamed via my phone, which already has LTE. I'm not sure what "remote controls" are in this context. Letting me set the A/C fan to high from the Internet (almost certainly via a browser or app running on... wait for it... my phone)?

We've already paid for the LTE modems and app integration on the phone side of things, don't need to pay for it a second time on the car side or have to deal with the vehicle manufacturer's terrible implementations of navigation apps and media streaming services or yet another vendor collecting telemetry about me and reselling it to whoever wants to pay.

