
> Some languages will wrap the value around, and some will actually abort

What if a language had clamped int and modular mod types? Overflow on mod would work as advertised and, if you disabled abort, overflow on int would stay closer to its true mathematical value than if it wrapped.

  mod32 m = UINT_MAX+1
  //-> 0
  int32 i = INT_MAX+1
  //-> INT_MAX


Ada offers modular types (ranging from 0 to modulus-1 as expected) with the overflow and underflow behavior you want, but not a clamped type as you describe it. So you can get the first but not the second behavior.


So, Ada modular types are just like C uint?


Except that it allows for an arbitrary modulus. For instance, you can have:

  type Whatever_Name is mod 10;
Removing that restriction is nice: it lets the language and its type system actually express parts of the program logic, unlike the inexpressive type system C provides.
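A rough sketch of what an arbitrary-modulus type like Ada's `mod 10` could look like in a language without built-in support, here in Rust (the `Mod10` name and reduce-on-every-operation design are my own, not Ada's semantics verbatim):

```rust
use std::ops::Add;

// Hypothetical counterpart of Ada's `type Whatever_Name is mod 10`:
// every construction and operation reduces modulo 10.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Mod10(u32);

impl Mod10 {
    fn new(v: u32) -> Self {
        Mod10(v % 10)
    }
}

impl Add for Mod10 {
    type Output = Mod10;
    fn add(self, rhs: Mod10) -> Mod10 {
        Mod10((self.0 + rhs.0) % 10)
    }
}

fn main() {
    let a = Mod10::new(7);
    let b = Mod10::new(5);
    assert_eq!(a + b, Mod10(2)); // 12 mod 10 == 2
}
```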


Ok, that's nice. Would one call that a dependent type? I guess I would naively implement this by appending %10 to all assignments to a Whatever_Name.


No. Dependent types go further and allow you to express things like: the length of an output vector/array is the same as some input number. That is, the type depends on some value (a runtime value). This mod type's modulus is fixed at compile time.


I don't know of any language that has these types, and I personally don't find them so handy. For example, what would happen if you mixed the two types? If you want mod or clip behavior in a certain part of an algorithm, it is easy enough to write a function that does it, and you can overload the + operator if you want the algorithm code to read or write more easily. Also, in C it is simply modulo (straight what the CPU does), and in Python there's simply no overflow: every int is an object that can grow as big as needed. Both takes are consistent and rather straightforward to understand.


As for saturated arithmetic, you can overload operators (as in many languages), with an added postcondition that you can check at runtime, or prove statically if you stay within the SPARK provable subset of Ada.
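A minimal Rust sketch of the operator-overloading approach for saturated arithmetic (the `Clamped` type is hypothetical; the "postcondition" here is just an assertable property, not a SPARK proof):

```rust
use std::ops::Add;

// Hypothetical clamped i32: addition saturates at i32::MAX / i32::MIN
// instead of wrapping or aborting.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Clamped(i32);

impl Add for Clamped {
    type Output = Clamped;
    fn add(self, rhs: Clamped) -> Clamped {
        // Postcondition: the result never wraps past the representable range.
        Clamped(self.0.saturating_add(rhs.0))
    }
}

fn main() {
    assert_eq!(Clamped(i32::MAX) + Clamped(1), Clamped(i32::MAX));
    assert_eq!(Clamped(i32::MIN) + Clamped(-1), Clamped(i32::MIN));
    assert_eq!(Clamped(40) + Clamped(2), Clamped(42));
}
```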


> What would e.g. happen if you mix both types?

Like in

  int32 y=INT_MAX
  mod32 x=1
  y+=x
? You'd get y == INT_MAX. Or did you mean something more involved?

> you can overload the + operator

I fail to see the need. (Sorry, I'm a bit tired.) In this thread's context of static languages, your compiler knows the target type of an assignment (clip/mod) and so applies the right correction.

> in python there's simply no overflow, every int is an object that can be as big as needed

I guess there's a huge speed penalty in wrapping values into objects.


I'm not sure if the target type is always clear.

  y = (x + x) + y

What is the target type for x+x?

You could argue that you can have the same problem with operator overloading. But then again, it is best to implement the class just for a specific (type of) algorithm, within which you use the overloaded + operator (or maybe better another symbol resembling a + sign) in an unambiguous way. The class might also have more functionality specifically suited to that type of algorithm. The benefits are less ambiguity, and that normal operations stay fast (directly using the CPU's arithmetic instructions); the code is only slower where clipping or the like is needed.

Python is indeed slow and resource intensive; that's where libraries like numpy fit in. It uses vectorized native types (numpy ints and floats are native machine types), together with vector operations running in precompiled low-level code. Python only sees the 'outside' of the vectors as Python objects.

I once wondered why my language did not have native support for fixed-point operations. Once I started implementing custom types and a class supporting fixed point, I noticed the sheer amount of design choices that were needed. A concise, generic implementation of it is nearly impossible.


(Hoping you see this. HN's lack of reply-alert is not super practical)

> What is the target type for x+x?

There's no 'target' except the assigned 'y'. The compiler would just do

  y = CLIP((x+x)+y)
with x and y cast to int64 in the expression, or something like this.
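That lowering can be sketched in Rust (the `clip` helper is hypothetical; it stands in for what the compiler of this imagined language would emit): widen operands to 64 bits, evaluate, then clamp back into the 32-bit range on assignment.

```rust
// Hypothetical lowering of `y = (x + x) + y` for a clamped int32 target:
// widen to i64, evaluate exactly, then clamp back to the i32 range.
fn clip(v: i64) -> i32 {
    v.clamp(i32::MIN as i64, i32::MAX as i64) as i32
}

fn main() {
    let x: i32 = 1;
    let y: i32 = i32::MAX;
    // i32::MAX + 2 overflows i32, but the widened sum is exact
    // and clip() saturates it back to i32::MAX.
    let y = clip((x as i64 + x as i64) + y as i64);
    assert_eq!(y, i32::MAX);
}
```

Note there's no intermediate "target type" for `x + x` alone; only the final assignment clips.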

> i noticed the sheer amount of design choices that were needed

I was just imagining these types. You are most certainly right that it would be pretty involved to get right.


Okay, that type of 'target' I have never seen in practice. It would mean that if you wrote the same equation as two statements, the casting would differ, because you would then have two targets. Anyway, there are endless ways of designing a language, all with pros and cons. Nice thought exercise.


Rust has those types as Saturating<u32> and Wrapping<u32>.
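For reference, a small sketch using those standard-library wrappers (`std::num::Wrapping` has long been stable; `std::num::Saturating` was stabilized later, in Rust 1.74):

```rust
use std::num::{Saturating, Wrapping};

fn main() {
    // Wrapping: modular arithmetic, like the proposed mod32.
    let w = Wrapping(u32::MAX) + Wrapping(1);
    assert_eq!(w.0, 0);

    // Saturating: clamped arithmetic, like the proposed clamped int32.
    let s = Saturating(u32::MAX) + Saturating(1);
    assert_eq!(s.0, u32::MAX);
}
```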



