I think the worst thing about the golden age of symbolic AI was that there was never a systematic approach to reasoning about uncertainty.
The MYCIN system [1] was rather good at medical diagnosis and, like other systems of the time, had an ad-hoc procedure for dealing with uncertainty, which is essential in that domain.
The problem is that it is not enough to say "predicate A has an 80% chance of being true"; if you have predicates A and B you have to consider the probabilities of all four combinations (AB, (not A) B, A (not B), (not A) (not B)), and with N predicates you have to consider joint probabilities over 2^N possible situations, which is a lot.
For any particular situation the values are correlated and you don't really need to consider all those contingencies, but a general-purpose logical reasoning system has to be able to handle the worst case. Deep learning systems seem to take shortcuts that work much of the time, but they may hit a wall on how accurate they can be because of it.
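To make the blow-up concrete, here is a small sketch (the function name is mine, not from MYCIN or any particular system) that enumerates the joint truth assignments a fully general probabilistic reasoner would have to assign probabilities to:

```python
from itertools import product

def joint_assignments(predicates):
    """Enumerate every joint truth assignment over the given predicates.

    A general-purpose probabilistic reasoner needs a probability for each
    of these 2^N rows; that table is what blows up as N grows.
    """
    return [dict(zip(predicates, values))
            for values in product([True, False], repeat=len(predicates))]

# Two predicates -> the four situations AB, (not A)B, A(not B), (not A)(not B)
rows = joint_assignments(["A", "B"])
assert len(rows) == 4

# Ten predicates already gives 1024 joint situations.
assert len(joint_assignments([f"P{i}" for i in range(10)])) == 2 ** 10
```

The correlation point is visible here too: in a real domain most of those rows have negligible probability, but a sound general-purpose reasoner can't assume that in advance.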
Symbolic AI à la MYCIN and other expert systems didn't do anything that a modern database query engine can't do with far greater performance. The real bottleneck is coming up with the set of rules for the system to follow.
Early production rule engines really sucked: much of the time they had no indexes at all and did a lot of full scans. Good RETE engines with indexes didn't become mainstream until the late 1980s, by which point the industry was already losing interest. In a lot of ways Drools is pretty good, as is the Jena rules engine, but none of these have ways of dealing with uncertainty, which is necessary if you're going to work with language and have to decide which of 10,000 possible parses of a sentence is right. People used to talk as if 10,000 rules was a lot, but handling 2 million well-organized rules with Drools is no problem at all today.
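For readers who haven't used a production rule engine, here is a toy forward-chaining sketch in Python. It is nothing like a real RETE network (Drools builds a discrimination network so it never rescans all the rules), and the facts and rules are invented for illustration:

```python
# Toy forward-chaining production system. Facts are strings; each rule is
# a (conditions, conclusion) pair. A naive engine like this rescans every
# rule on every pass -- the "full scan" behavior of early engines.
def run_rules(facts, rules):
    """Fire rules until no rule can add a new fact, then return all facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"fever", "infection"}, "prescribe_antibiotic"),
    ({"gram_negative", "infection"}, "suspect_e_coli"),
]
result = run_rules({"fever", "infection"}, rules)
assert "prescribe_antibiotic" in result
assert "suspect_e_coli" not in result
```

Note that every conclusion here is simply true or false; there is no slot for "this rule's conclusion holds with probability 0.8," which is exactly the gap when you need to rank 10,000 candidate parses.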
I think the problems of knowledge base construction are overstated and that a lack of tools is the real problem. Or rather, the Cyc experience shows that rules are not enough: after Lenat died it came out that Cyc didn't just have a big pile of facts and rules and a general reasoning procedure, it also had a large database of algorithms for solving specific problems. In principle you can solve anything with an SMT solver, but if you actually try it you'll find you can code up a special-purpose algorithm for common tasks before the SMT solver even gets warmed up.
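A toy illustration of that trade-off (not an SMT benchmark; both functions and the problem are invented): find two integers with a given sum and product. The "general" version does blind search over the candidate space, the way a generic solver explores assignments; the "special" version just solves the quadratic x^2 - s*x + p = 0 directly.

```python
import itertools
import math

def find_pair_general(s, p, bound=200):
    """General-purpose approach: blind search over all pairs in a bounded
    space, standing in for a generic solver's exploration."""
    for x, y in itertools.product(range(-bound, bound + 1), repeat=2):
        if x + y == s and x * y == p:
            return (x, y)
    return None

def find_pair_special(s, p):
    """Special-purpose approach: x and y are the roots of t^2 - s*t + p."""
    disc = s * s - 4 * p
    if disc < 0:
        return None
    root = math.isqrt(disc)
    if root * root != disc or (s + root) % 2 != 0:
        return None
    return ((s + root) // 2, (s - root) // 2)

# Both find {5, 8} for sum 13, product 40, but the special-purpose version
# is constant-time while the general search grows with the bound.
assert sorted(find_pair_general(13, 40)) == sorted(find_pair_special(13, 40))
```

The Cyc lesson, on this reading, is that a usable system ends up accumulating a library of `find_pair_special`-style routines and dispatching to them, falling back to general reasoning only when nothing specific applies.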
Part of the production rules puzzle is that there never was a COBOL of business rules; instead you got different systems that gave different answers to various tricky problems, like how to control the order of execution when it matters, how to represent negation, etc.
[1] https://en.wikipedia.org/wiki/Mycin