It depends on how technical you want to get. It's certainly not a loop in the classic sense, like a `for` loop: there are no `continue` or `break` semantics, and you can't return out of it (and yes, yes, of course you can still do all that stuff, just not with dedicated keywords). I call it a loop, though. Whenever I make a lil' server in Elixir (mostly for teaching) I always call the function `loop`:
    defmodule Server do
      def start_link do
        spawn_link(&loop/0)
      end

      defp loop do
        receive do
          msg ->
            IO.puts(msg)
            loop()
        end
      end
    end
Of course, that assumes that you write your functions so that they can be converted into iteration!
Did you know that some C/C++ compilers will do tail-call -er- optimization? I learned this quite a while ago, so I'd expect every major C/C++ compiler to do it by now, but I was pretty impressed at the time.
> Of course, that assumes that you write your functions so that they can be converted into iteration!
I was indeed assuming this. However (with the caveat of my lack of compiler knowledge), not explicitly writing your functions that way--at least in the classic way--doesn't necessarily mean your Erlang functions won't be optimized. See section 2.3 [0]
Though again, I don't know shit about compilers and while I can imagine what callER optimized calls could look like, I would need some examples!
> ...while I can imagine what callER optimized calls could look like...
What do you mean by "callER" optimized calls? That's a term I think I'm quite unfamiliar with.
> Though again, I don't know shit about compilers...
Oh, I also know fuckall about compilers. I'm just a bumbler who has muddled through a moderately-successful programming "career".
IME, on the topic of recursive functions, the big difference between Erlang and -say- a C++ program compiled with GCC is that the latter has a specific (but surely configurable somehow) call stack size limit which terminates the program if you exceed it. Whereas Erlang's limit seems to be "Well, how much RAM do you have, and how much space do I need to store away the arguments for each call?".
When I mentioned that some C/C++ compilers do tail-call optimization, what I intended to say was that they converted the recursive call into something more like iteration. I'm pretty sure that historically [0] the only way you could do this optimization was if the very last thing a function did was to call itself... doing anything after that call meant that the conversion to iteration was not possible. I have no idea whether things have gotten quite a lot fancier between when I made this discovery and now.
If Erlang had a call stack whose size was limited by something other than the available RAM in the system, then whether or not your functions were written tail-recursive [1] would be quite a lot more important, I think.
[0] And maybe it's still the case today? Refer back to my near-zero knowledge of compilers.
[1] I hope I don't forget that term again. It's much more succinct than my jumble about "tail-call optimization"
Wait, is there something I don’t actually understand about recursion? When a recursive function calls itself, is that not a loop?