This is where databases that support row values can be helpful. The original query:
SELECT count(*)
FROM scale_data
WHERE section = ?
AND id2 = ?
would be re-written with row values as:
SELECT count(*)
FROM scale_data
WHERE (section,id2) = (?,?)
In such a case, the database will revert to a table scan if it can't find an index that matches all of the columns (in left-to-right order) in the row value on the left-hand side of the expression. That would be immediately apparent both in the query's actual performance and in its execution plan.
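As a concrete sketch of checking this, assuming SQLite (which supports row values as of 3.15) and a hypothetical index whose leading column is section (index name and column order are made up for illustration):

CREATE INDEX idx_scale ON scale_data (section, id1, id2);

-- Partial match: the planner can still seek on the section prefix
-- and then filter the rows within it on id2.
EXPLAIN QUERY PLAN
SELECT count(*) FROM scale_data WHERE section = ? AND id2 = ?;

-- Row-value form: per the behavior described above, the planner
-- should only accept an index matching all listed columns
-- left-to-right, and fall back to a full table scan otherwise.
EXPLAIN QUERY PLAN
SELECT count(*) FROM scale_data WHERE (section,id2) = (?,?);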
That doesn't seem more helpful to me... it's just making the worst case worse.
In a situation where other queries would benefit from the id1, id2 order, and creating another index wouldn't pay off, an index seek is far better than a table scan.
Making the worst case worse is the point. My assumption here was that the issue was catching this type of optimization problem during development, in which case using row values would help. The idea isn't that a table scan is better than an index scan, but that an index scan that isn't very selective will get progressively worse as the table grows, and possibly not be caught during development. Row values are the developer's way of saying "I want this condition to match an index on all columns, or not match at all".
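And once the table scan does surface in the plan, the fix it forces is explicit: add an index whose columns match the row value left-to-right. A hypothetical follow-up to the sketch above:

-- Add an index matching the row value's columns left-to-right
-- (index name is hypothetical).
CREATE INDEX idx_section_id2 ON scale_data (section, id2);

-- The row-value form should now report an index search on both
-- columns rather than a full table scan.
EXPLAIN QUERY PLAN
SELECT count(*) FROM scale_data WHERE (section,id2) = (?,?);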