
I've always thought that (Oca)ML got it right here: strict by default, but full laziness easily available.


It's not so simple in my mind. Sure, it sounds good on paper to just throw out laziness and go strict by default, until you realize how much you give up by not having laziness as the default.

My difficulty with Haskell was not that it was lazy, but that it was hard to tell how something would be evaluated without understanding its entire evaluation model.

I think you could have a lazy by default language that makes this a lot more transparent but I'm still thinking about how that might look.


Are you familiar with Python's approach to iteration? Obviously Python itself is a world away from Haskell, but its model of lazy and eager iteration is very pleasant to work with, and with Haskell-style typeclasses it would be dead-simple.


Absolutely, along with .NET's IEnumerable/LINQ.

Laziness is definitely possible in a strict language via thunks (similar to how Haskell does it behind the scenes). However I didn't realize what laziness by default makes possible until I spent considerable time in the Haskell ecosystem. It really is a game changer.
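A minimal sketch of such a thunk in Python (call-by-need: delay the computation and cache its result, roughly what Haskell does behind the scenes; the class and names here are illustrative, not any particular library's API):

```python
class Thunk:
    """Delay a computation and cache its result (call-by-need)."""
    _UNSET = object()

    def __init__(self, fn):
        self._fn = fn
        self._value = Thunk._UNSET

    def force(self):
        if self._value is Thunk._UNSET:
            self._value = self._fn()
            self._fn = None  # drop the closure once evaluated
        return self._value

calls = []
t = Thunk(lambda: calls.append("evaluated") or 42)
print(t.force())   # 42 -- computed now
print(t.force())   # 42 -- cached; the lambda ran only once
print(len(calls))  # 1
```

The memoization is the part that distinguishes call-by-need (Haskell's model) from plain call-by-name, where the suspended computation would re-run on every force.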


> Are you familiar with Python's approach to iteration? Obviously Python itself is a world away from Haskell, but its model of lazy and eager iteration is very pleasant to work with

That isn't my experience. As soon as you're in a chain of transformations you have no idea what's lazy or strict.


This is true, but I don't often see people writing code like this in the wild. That's what I mean about it being actually easy with static typing -- you can have the compiler and/or your IDE check for you whether it's a lazy sequence (like a Generator in Python) or not (like a Set, List, etc).


Most of the time it would be polymorphic over whether the sequence is lazy though. And the kind of code that wouldn't behave properly with a lazy sequence tends not to be detectable statically; you could maybe rely on types to convey whether the author intended their code to work on lazy sequences, but realistically that probably wouldn't help you much.


Wouldn't the return value of the previous function in the chain make it obvious?

E.g. in Python you can have

    from typing import Generator, Iterable, List, Sequence

    def foo() -> Generator: ...

    def bar(x: Iterable) -> Sequence: ...

    def baz(y: Sequence) -> List: ...

    lst = baz(bar(foo()))
It's obvious to an experienced Python user (and to a static type checker) what is lazy here. All you need now is a compiler to enforce it.


You don't necessarily know which functions are going to be chained together though. If I write (my Python's a bit rusty):

    def f(x):
      return map(someOtherFunction, map(someFunction, x))
how can I know whether someOtherFunction interferes with someFunction if they run at the same time? The only way to know whether the maps are strict or lazy is knowing what x is, but x might come from a different file, or even a different project, so there's no way to know whether this function only works (and is only expected to work) for strict x. Actually, even knowing what x is isn't enough, because even if x is strict, f can end up being called in a lazy way if I later do:

    (y for x in someGeneratorThatGeneratesLists for y in f(x))
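And the interleaving is real: in Python 3, map is lazy, so the two functions in the chain do run element-by-element rather than one whole pass after the other. A small sketch (the tracing functions are made up for illustration):

```python
order = []

def first(x):
    order.append(f"first({x})")
    return x

def second(x):
    order.append(f"second({x})")
    return x

def f(x):
    # Both maps are lazy iterators; nothing runs until consumed.
    return map(second, map(first, x))

list(f([1, 2]))
print(order)  # ['first(1)', 'second(1)', 'first(2)', 'second(2)']
```

Note the calls interleave per element; if the maps were strict (as in Python 2), first would finish over the whole input before second started.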


That's my point about a static type system. You don't have that problem if types are known at compile time.

I agree that it's a nuisance in Python as-is (but I still like it overall).


> That's my point about a static type system. You don't have that problem if types are known at compile time.

You probably do though, because people would write f to be polymorphic and then only test with strict collections.


Agreed, and I think that's where post-Haskell languages are going (I've already mentioned Idris).


Sorry, I'm not too knowledgeable about these things. What is the strict-versus-lazy dichotomy? I wouldn't have thought that they had to do with each other.


Strict being the opposite of lazy: expressions are evaluated immediately.

Like for instance, given an expression...

    x = foo(13)
In a strict language foo(13) will be evaluated immediately.

In a lazy evaluation model, x will be 'the result of foo(13)', but foo(13) won't actually be evaluated until some expression forces evaluation by, for instance, printing the value.

This is a gross simplification... laziness can do some really neat things, like having functions that return infinite streams. The function would theoretically never terminate, but in practice it would be consumed via something like...

    for event in infinite_stream():
        # .. do stuff ..
        if event == TERMINAL_VALUE:
            break
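In Python such an infinite stream can be sketched with a generator (infinite_stream here is an illustrative name, not a library function):

```python
import itertools

def infinite_stream():
    # An infinite generator: would yield 0, 1, 2, ... forever,
    # but values are only produced on demand.
    n = 0
    while True:
        yield n
        n += 1

# Consume lazily: only the first five elements are ever computed.
first_five = list(itertools.islice(infinite_stream(), 5))
print(first_five)  # [0, 1, 2, 3, 4]
```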


Is strict a commonly used word for the opposite of lazy? I've always seen it as "eager", but maybe that's in the context of other programming languages.


It's commonly used that way, but it's a slightly informal use: being rigorous, "strict" and "non-strict" describe semantic models, whereas "eager" and "lazy" describe evaluation strategies with those semantics. I.e., eager evaluation leads to strict semantics, and lazy evaluation to non-strict semantics.
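The classic way to see the difference is a function that ignores its argument; a Python sketch, simulating non-strict semantics with a thunk (const and const_nonstrict are illustrative names):

```python
def const(x, y):
    return x  # ignores its second argument

# Under strict semantics the argument is evaluated before the call,
# so const never even runs:
try:
    const(1, 1 // 0)
except ZeroDivisionError:
    print("argument evaluated (and failed) before const was entered")

# Simulating non-strict semantics by passing a thunk:
def const_nonstrict(x, y_thunk):
    return x  # y_thunk is never forced

print(const_nonstrict(1, lambda: 1 // 0))  # 1 -- the division never happens
```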


Is it side effects that creates the need for strictness?


I wouldn't say so; it's mostly efficiency. The bookkeeping cost of laziness isn't free. It can potentially pay dividends when you're able to avoid evaluation altogether, but you pay for it everywhere else.


I'd say it's the need to reason about evaluation, which might be because of side effects but might also be because of performance (of course that can be regarded as another side effect). Laziness has theoretical advantages but in practice we haven't been able to get away from having to understand how our programs are executed, even in Haskell.



