UI shouldn't think about validation
February 27, 2026 · 4 min read · Updated February 27, 2026
I recently worked on abstracting the presentation layer from the UI layer. Ideally, the UI layer should be as dumb as possible when it comes to any non-UI logic. However, the presentation layer often suffers from a fundamental architectural flaw: it tries to decide what the UI should do.
This happens either directly (passing a specific error String to display) or indirectly (passing a string resource Int identifier). The latter creates a false sense of decoupling: the presentation layer is still dictating the exact screen output.

This approach breaks the Separation of Concerns (SoC) principle. The solution is to provide enough contextual information for the UI to decide for itself how to display the error, without leaking presentation or domain logic.
My specific problem area was form input validation. I needed to show UI formatting errors (e.g., length limits) while keeping input validation strictly separated from domain invariants — a distinction many developers miss.
The initial solution in my head was simple: why not just provide enums? The validator takes the user's raw input and returns an enum constant describing what went wrong, or nothing when the input is valid.
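The original snippet isn't reproduced here, so this is a hypothetical sketch of what that enum-based first attempt could look like (all names, the NameInput wrapper, and the default limit are my own assumptions):

```kotlin
// Hypothetical sketch of the enum-based first attempt (names are assumptions).
enum class NameError { EMPTY, TOO_LONG }

// The input is just a thin wrapper around the raw string the user typed.
data class NameInput(val value: String)

// Returns an enum constant for the first violation, or null when valid.
fun validate(input: NameInput, maxLength: Int = 32): NameError? = when {
    input.value.isBlank() -> NameError.EMPTY
    input.value.length > maxLength -> NameError.TOO_LONG
    else -> null
}
```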
It looks clean, but iterating on this revealed a problem: if an enum simply says TOO_LONG, the UI still has to calculate the input size or fetch the max length from the domain to show a meaningful error message. It's not as dumb as I want it to be.

Enums weren't the solution. Instead, I migrated to sealed structures to represent validation errors. These data-rich issues give the UI enough context to render the error without forcing it to think or calculate.
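Again a sketch under my own naming, since the original code isn't shown: each issue carries the data the UI needs, so rendering "42 / 32 characters" requires no trip back to the domain layer:

```kotlin
// Hypothetical sketch of data-rich, sealed validation issues (names are assumptions).
sealed interface NameValidationIssue {
    data object Empty : NameValidationIssue

    // Carries both the actual and the allowed length, so the UI can render
    // e.g. "42 / 32 characters" without knowing any domain constraints.
    data class TooLong(val actualLength: Int, val maxLength: Int) : NameValidationIssue
}

// Returns the first issue found, or null when the input is valid.
fun validateName(value: String, maxLength: Int = 32): NameValidationIssue? = when {
    value.isBlank() -> NameValidationIssue.Empty
    value.length > maxLength -> NameValidationIssue.TooLong(value.length, maxLength)
    else -> null
}
```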
Now, the UI receives every detail it needs. I eliminated the "decision context hunt" in the UI layer, as it no longer needs to know the presentation layer's implementation details or domain constraints.
Another benefit of this approach is validator composition: I can reuse common validators across different screens, or compose them with screen-specific custom validators when business logic varies. Some might argue this introduces code duplication, but strict SoC (both at the layer level and locally) is always worth the trade-off. Besides, I don't think it's a big bottleneck in the era of AI (and honestly, even without it, writing these validators doesn't take much time).
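The post doesn't show the composition API, but one way to sketch the idea (entirely my own assumption) is to model a validator as a function from input to an optional issue, and compose validators by reporting the first failure:

```kotlin
// Hypothetical composition sketch: a validator maps raw input to an optional issue.
sealed interface Issue {
    data object Empty : Issue
    data class TooLong(val actualLength: Int, val maxLength: Int) : Issue
    data class ForbiddenCharacter(val char: Char) : Issue
}

typealias Validator = (String) -> Issue?

fun notEmpty(): Validator = { if (it.isBlank()) Issue.Empty else null }

fun maxLength(limit: Int): Validator = {
    if (it.length > limit) Issue.TooLong(it.length, limit) else null
}

// Run validators in order and report the first issue found.
fun compose(vararg validators: Validator): Validator = { input ->
    validators.firstNotNullOfOrNull { it(input) }
}

// A common validator reused across screens...
val commonName: Validator = compose(notEmpty(), maxLength(32))

// ...composed with a screen-specific custom rule.
val usernameValidator: Validator = compose(
    commonName,
    { input -> input.firstOrNull { !it.isLetterOrDigit() }?.let { Issue.ForbiddenCharacter(it) } },
)
```

Because composed validators are themselves Validator values, common and screen-specific rules stack freely without either layer knowing about the other.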