I'm in the process of writing a library in Python, and I've run into a design problem concerning the use of polymorphism.
I have an ABC with abstract method 'foo':
from abc import ABC, abstractmethod

class A(ABC):
    @abstractmethod
    def foo(self, arg1: int, arg2: int) -> bool:
        pass
Subclasses of A are part of the public API, but A and its subclasses are not intended to be subclassed by the user.
Most subclasses will implement 'foo' with the same signature as the ABC, but some subclasses will have a few extra default arguments:
class B(A):
    def foo(self, arg1: int, arg2: int, arg3: str = "bar") -> bool:
        return True

class C(A):
    def foo(self, arg1: int, arg2: int, arg3: str = "bar", arg4: float = 0.42) -> bool:
        return True
The types of the default arguments are well-defined, i.e. "arg3", when in use, will always be str, "arg4" will always be float, etc. The corresponding default values are also stable, i.e. "arg3" always defaults to "bar", etc.
Now, some other internal classes want to call the foo method on a bunch of instances of A's subclasses at once. Using isinstance checks works, but is of course rather cumbersome:

from collections.abc import Iterator

class D:
    def call_foo_on_child_classes(self, child_classes: list[A]) -> Iterator[bool]:
        for child_cls in child_classes:
            if isinstance(child_cls, B):
                yield child_cls.foo(1, 2, "a")
            elif isinstance(child_cls, C):
                yield child_cls.foo(1, 2, "a", 0.42)
            # etc.
Having the child classes of A take *args, **kwargs would also work, but it seems like an equally bad solution, because 'foo' is part of the public API and the explicit signatures would be lost.
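For comparison, the rejected *args/**kwargs variant would look something like the following sketch (the return value is a placeholder so the behavior is checkable); it shows exactly how the public signature degrades:

```python
from abc import ABC, abstractmethod
from typing import Any

class A(ABC):
    @abstractmethod
    def foo(self, arg1: int, arg2: int, **kwargs: Any) -> bool:
        ...

class B(A):
    def foo(self, arg1: int, arg2: int, **kwargs: Any) -> bool:
        # arg3's name, type, and default no longer appear in the signature,
        # so callers get no help from introspection or a type checker.
        arg3 = kwargs.get("arg3", "bar")
        return arg3 == "bar"  # placeholder logic

print(B().foo(1, 2), B().foo(1, 2, arg3="baz"))  # True False
```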
I came up with the following idea, and wanted to check if this is a valid way to go about it, or if I should fundamentally rethink the design of my classes instead.
To the ABC, I add a private abstract method that gathers up all the default arguments of the child classes:

class A(ABC):
    @abstractmethod
    def foo(self, arg1: int, arg2: int) -> bool:
        pass

    @abstractmethod
    def _call_foo(self, arg1: int, arg2: int, arg3: str = "bar", arg4: float = 0.42) -> bool:
        pass
And the child classes would implement it as such:
class B(A):
    def foo(self, arg1: int, arg2: int, arg3: str = "bar") -> bool:
        return True

    def _call_foo(self, arg1: int, arg2: int, arg3: str = "bar", arg4: float = 0.42) -> bool:
        return self.foo(arg1, arg2, arg3)

class C(A):
    def foo(self, arg1: int, arg2: int, arg3: str = "bar", arg4: float = 0.42) -> bool:
        return True

    def _call_foo(self, arg1: int, arg2: int, arg3: str = "bar", arg4: float = 0.42) -> bool:
        return self.foo(arg1, arg2, arg3, arg4)

# etc.
And now class D can simply call foo polymorphically on instances of A's subclasses, while the public API of the child classes stays uncluttered:

class D:
    def call_foo_on_child_classes(self, child_classes: list[A]) -> Iterator[bool]:
        for child_cls in child_classes:
            yield child_cls._call_foo(1, 2, "a", 0.41)
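For what it's worth, a minimal runnable sketch of this pattern (with the self parameters and imports filled in, and placeholder return values so the results are checkable) behaves as hoped; call_foo_on_children is a hypothetical free-function stand-in for D:

```python
from abc import ABC, abstractmethod
from collections.abc import Iterator

class A(ABC):
    @abstractmethod
    def foo(self, arg1: int, arg2: int) -> bool: ...

    @abstractmethod
    def _call_foo(self, arg1: int, arg2: int,
                  arg3: str = "bar", arg4: float = 0.42) -> bool: ...

class B(A):
    def foo(self, arg1: int, arg2: int, arg3: str = "bar") -> bool:
        return arg3 == "bar"  # placeholder logic

    def _call_foo(self, arg1: int, arg2: int,
                  arg3: str = "bar", arg4: float = 0.42) -> bool:
        # B silently drops arg4, which its own foo does not accept.
        return self.foo(arg1, arg2, arg3)

class C(A):
    def foo(self, arg1: int, arg2: int,
            arg3: str = "bar", arg4: float = 0.42) -> bool:
        return arg4 > 0  # placeholder logic

    def _call_foo(self, arg1: int, arg2: int,
                  arg3: str = "bar", arg4: float = 0.42) -> bool:
        return self.foo(arg1, arg2, arg3, arg4)

# Hypothetical caller, standing in for class D above.
def call_foo_on_children(children: list[A]) -> Iterator[bool]:
    for child in children:
        yield child._call_foo(1, 2, "bar", 0.42)

print(list(call_foo_on_children([B(), C()])))  # expected: [True, True]
```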
Is this advisable?
In effect, you have added _call_foo() to the public API that everyone implements. Your subclasses don't have to use every parameter, but they have to be prepared to receive it. Alternatively, if you want different signatures for B and C, then get rid of the A base class and instead use a union type B | C.