When polymorphism bites you in the ass

A curious problem I faced recently when Java polymorphism met Scala polymorphism and they gave birth to an ugly baby that I had to confront.

A bit of backstory: we have a huge monolithic Scala web application that was written 4 years ago and has barely been maintained since (although it worked surprisingly well all that time). The project has a lot of Java dependencies, and despite the claims that Scala has “seamless Java interop”, it’s not all rainbows and unicorns.

“Error: ambiguous reference to overloaded definition…” WHAT?


We use the excellent Mockito framework in our unit tests, but it was never upgraded from v1 to v2, so we decided to fix that: v2 introduced a lot of nice improvements that might make our unit tests cleaner and better.

The upgrade was mostly seamless; however, some methods changed their signatures. In particular, doReturn now has two versions:

// similar to the old version that we use in our tests
public static Stubber doReturn(Object toBeReturned)

// a new, varargs version
public static Stubber doReturn(Object toBeReturned, Object... toBeReturnedNext)

So basically the second version is a convenient new way to stub more than one value at once:

// uses the first version of the method
doReturn("bar").when(mock).foo();

// uses the new, second version
doReturn("bar", "foo", "qix").when(mock).foo();

Mockito targets Java, and Java is perfectly capable of recognizing when the single-argument version is called and when the varargs one is. However, when calling the same method from Scala, we get a compilation error:

Error: ambiguous reference to overloaded definition,
both method doReturn in object Mockito of type (x$1: Any, x$2: Object*)org.mockito.stubbing.Stubber
and  method doReturn in object Mockito of type (x$1: Any)org.mockito.stubbing.Stubber
match argument types (String)

What this error means is that Scala cannot figure out which version of the method is being called: the compiler considers both methods equally valid candidates. Now that’s interesting. Let’s try to understand why.



When we call it with 2 or more arguments, the compiler sees right away that the second method is applicable, so no further investigation is performed: case closed, everyone can go home.

However, when we call the doReturn method with 1 argument, both versions are applicable! The reason is that the vararg parameter toBeReturnedNext can receive any number of arguments, including… 0.
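To see this part in isolation, here is a minimal self-contained sketch (VarargsDemo and g are made-up names, nothing from Mockito) of a varargs method happily accepting zero extra arguments:

```scala
object VarargsDemo {
  // same shape as the varargs overload of doReturn:
  // one fixed parameter plus a vararg tail
  def g(first: AnyRef, rest: AnyRef*): Int = 1 + rest.length

  def main(args: Array[String]): Unit = {
    println(g("bar"))               // rest is empty: prints 1
    println(g("bar", "foo", "qix")) // prints 3
  }
}
```

A one-argument call like `g("bar")` is a perfectly legal invocation of the varargs method, which is exactly why the compiler cannot dismiss the second overload of doReturn.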

Now, when the Scala compiler sees this kind of embarrassing situation, it performs so-called overloading resolution, trying to figure out which of the two methods is more specific. For some reason both methods are equally specific in our case, hence the ambiguous reference to overloaded definition error.

Let’s look at those two methods again:

// A
public static Stubber doReturn(Object toBeReturned)

// B
public static Stubber doReturn(Object toBeReturned, Object... toBeReturnedNext)

It’s possible to call B with parameters of A because as we already figured out, toBeReturnedNext can contain 0 elements. Fair enough.

But why on Earth would the compiler think that it’s also possible to call A with the parameters of B when there are 2 or more arguments? Method A takes only one argument, doesn’t it?

“I love the smell of implicit polymorphism in the morning”, — Martin Odersky


Now this is where one of the most controversial Scala features comes into play, called “auto-tupling”:

If a function receives one parameter of a type that a tuple conforms to (Object, AnyRef, etc.), we can call it with more than one argument, and these arguments will be implicitly converted into a tuple! Now that’s fun. Take a look:

// tuple conversion is applicable because a tuple conforms to Object
def f = (x: Object) => println(x)

f(1, 2, 3)
// output: "(1,2,3)"

// no tuple conversion here because the type of parameter x is more specific and cannot be a tuple
def m = (x: List[Int]) => println(x)

m(1, 2, 3)
// output: "too many arguments (3) for method apply..."

[live example]

The aim of auto-tupling is just to avoid weird looking ((a, b, c)) syntax. Martin Odersky

Ah well, at least it’s pretty.

So, given that auto-tupling is possible, A can be called with the parameters of B, since they can be converted into a tuple. That makes A and B equally specific during overloading resolution.
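Here is a minimal sketch of that half of the ambiguity, assuming Scala 2 with default compiler flags (AutoTuplingDemo and a are made-up names; a has the same shape as method A). The three arguments get packed into a single Tuple3:

```scala
object AutoTuplingDemo {
  // same shape as method A: a single Object/AnyRef parameter
  def a(x: AnyRef): AnyRef = x

  def main(args: Array[String]): Unit = {
    // the three arguments are auto-tupled into one Tuple3;
    // scalac emits an "adapted the argument list" warning here
    println(a("bar", "foo", "qix"))
  }
}
```

So a call like `doReturn("bar", "foo", "qix")` is, in principle, also a valid call to the single-argument overload, which is why neither overload wins the specificity contest.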

There is hope


Now, it’s unlikely that this behavior will ever be fixed in Scala 2. But there is hope.

Dotty still uses auto-tupling but not as pervasively as nsc. In particular, auto-tupling is done after overloading resolution. It’s a last effort attempt if things do not work out without it. Martin Odersky

Another quote:

Note that compared to Scala 2.x, auto-tupling is more restricted. … it does not apply if the function in question is overloaded. This avoids problems like accidentally picking an overloaded variant taking an Object parameter when some other variant is intended but the right number of parameters is not passed. Martin Odersky

So auto-tupling is not going anywhere, but at least Scala 3 will not shit its pants during overloading resolution.

The following code throws the ambiguous reference... error in scalac, but works in dotty (the fixed-arity version is given a higher priority):

class X {
  def f(x: AnyRef) = x
  def f(x: AnyRef, y: AnyRef*) = x
}

val x = new X
x.f("a") // ambiguous in scalac, resolves to the fixed-arity version in dotty

scalac: [live example] dotty: [live example]

But what to do now?

I’d argue that it would probably be better to not have this kind of polymorphic mess in the first place. But if we’re not controlling the code (like in our case with Mockito), we can:

  1. Create a Java class that wraps the ambiguous methods.
  2. If we don’t care which version is called, pass an empty sequence as a second argument to resolve ambiguity:
class X {
  def f(x: AnyRef) = x
  def f(x: AnyRef, y: AnyRef*) = x
}

val x = new X
println(x.f("a", Seq.empty: _*))

[live example]
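The second trick can also be packaged once instead of being repeated at every call site. Here is a self-contained sketch of that idea (AmbiguousApi, SafeWrapper and doReturnOne are made-up names standing in for Mockito, which has the same overload shape):

```scala
// a made-up stand-in for the ambiguous Java API;
// Mockito's doReturn has exactly this overload shape
object AmbiguousApi {
  def doReturn(toBeReturned: AnyRef): String = "single"
  def doReturn(toBeReturned: AnyRef, toBeReturnedNext: AnyRef*): String = "varargs"
}

// wrapper: forward explicitly to the varargs overload,
// so call sites never face the ambiguity again
object SafeWrapper {
  def doReturnOne(toBeReturned: AnyRef): String =
    AmbiguousApi.doReturn(toBeReturned, Nil: _*)
}

object Main {
  def main(args: Array[String]): Unit =
    println(SafeWrapper.doReturnOne("bar")) // prints "varargs"
}
```

The `Nil: _*` spread makes the call unambiguous because only the varargs overload can accept an explicit sequence argument.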