Domain Testing

New software is often tested through a variety of methods such as unit testing, feature testing, and integration testing. Recently we’ve been making some major upgrades to the Icehouse Ventures investor portal, and we’ve wanted to be sure that the changes to the software don’t alter the underlying business logic. I think the style of testing we’re doing might be useful for other teams to explore. We’re calling this new style of testing “Domain Testing”.

Domain testing focuses on validating the correctness of our application’s business rules and domain-specific logic. Rather than testing technical implementation details, “domain tests” verify that the system produces the correct outcomes in realistic business scenarios.

In our venture capital context, this means ensuring that complex financial workflows such as fund management rules or carry waterfall calculations behave as intended. For example, one of our domain tests verifies that when a startup company generates a return, the correct amounts are shared across investors.

All software teams have been doing some kind of “scenario testing” forever in various forms, but we haven’t always had a consistent name for it. Domain testing focuses on testing a specific business function. The goal is to prove that real-world business rules hold true, using concrete examples that reflect how the software is actually used.

Domain Testing vs Existing Types of Tests

  • Unit Tests validate individual methods or classes in isolation, checking that a specific function returns the expected number for given inputs.
  • Feature Tests cover entire user stories or features, potentially including the user interface, controllers, and the database, often crossing multiple domains or services.
  • Integration Tests verify that different parts of the system interact correctly. In Laravel, this might mean simulating an HTTP request and asserting that the controller, ORM, and database all work together.
  • Domain Tests validate specific business scenarios. They don’t necessarily go through the UI or HTTP layer. Instead, they check domain logic directly in the code. The key difference is that domain tests explicitly encode business intent.

For example, consider this finance module scenario:

Given a fund of $10 million with an 8% preferred return hurdle,
When the fund earns $2 million of returns in its first year,
Then the investors (LPs) should receive their preferred return before any carry is distributed to the fund manager.

This reads like a real scenario. It verifies that the carry distribution follows the defined waterfall (a critical business rule) without getting sidetracked by implementation details.
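
To make that concrete, here is a minimal sketch of that scenario as a Pest domain test. The Fund factory, the DistributeReturns action, and the result fields are hypothetical placeholders rather than our real classes; the point is that the test body reads like the scenario above.

<?php

use App\Actions\DistributeReturns;
use App\Models\Fund;

it('pays LPs their preferred return before any carry is distributed', function () {
    // Given: a $10m fund with an 8% preferred return hurdle
    $fund = Fund::factory()->create([
        'committed_capital' => 10_000_000,
        'preferred_return_rate' => 0.08,
    ]);

    // When: the fund earns $2m of returns in its first year
    $distribution = app(DistributeReturns::class)->execute($fund, amount: 2_000_000);

    // Then: the full 8% hurdle ($800k) goes to the LPs before the manager sees any carry.
    // The exact carry figure depends on how the waterfall is defined, so this sketch only
    // asserts the ordering rule stated in the scenario.
    expect($distribution->preferredReturnToInvestors)->toBe(800_000)
        ->and($distribution->carryToManager)->toBeLessThanOrEqual(2_000_000 - 800_000);
});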

Test Driven Development for AI

The long-established practice of “Test Driven Development” could take on new importance in the era of AI-assisted development. Modern LLMs like Claude and ChatGPT are quickly evolving from simple code auto-completion tools into reasoning agents that can build whole new features or update existing ones. But as the scope of AI development widens, the consequences of errors are becoming more serious.

A solid test suite serves as both a safety net and, interestingly, a guide when an AI helps generate code. LLMs are powerful but prone to hallucinations or subtle logic errors.

By writing tests upfront (especially Domain Tests that define expected outcomes) we create guardrails that keep the AI on track. The developer specifies the correct business behaviour in the tests, and the AI must work within those constraints. This turns TDD into a way of “teaching” the AI what we actually want.

Domain Tests as Executable Documentation

Domain tests create a shared language of expectations that both humans and machines can understand. Written in terms of business scenarios using clear language and examples, they serve as executable documentation of what the software should do.

Developers, product managers, and even non-technical stakeholders can read these scenario-based tests and understand the system’s intended behaviour. The test suite becomes a living specification of the domain.

This clarity extends to AI systems too. An LLM assisting you can read your domain tests and see the business rules and constraints of your application. If it proposes a refactor or new feature, you can re-run the domain tests to verify the AI didn’t break any fundamental business rules. Our domain tests encode the business rules in a form that a machine can work with.

Domain-Driven Design

DDD teaches us to model software around the core business domain. In domain testing, we leverage these same concepts. One of the concepts we borrowed from DDD for our domain tests was “inspectability”, meaning that you should be able to put the system into the ‘test’ state and let a human look at it.

Normal software tests are usually randomised and ephemeral, not designed for human inspection. Using specific, repeatable scenarios is like the Kobayashi Maru simulator test in Star Trek: the setup is consistent, so you can explore different solutions that achieve the end goal.

Practical tip: We use Laravel’s “factories” and “seeders” to set up our domain scenarios. This way we can quickly spin up a fake Fund with ten fake investors, each with a certain ownership percentage, then create transactions to simulate cash flows. When your test reads like a story, you know you’re staying true to the domain-driven spirit.
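
As a rough sketch, a dedicated scenario seeder for that setup could look something like this. Fund, Investor, and Transaction are stand-ins for whatever models your own domain defines; only the factory and seeder plumbing is standard Laravel.

<?php

namespace Database\Seeders;

use App\Models\Fund;
use App\Models\Investor;
use App\Models\Transaction;
use Illuminate\Database\Seeder;

class FundScenarioSeeder extends Seeder
{
    public function run(): void
    {
        // A fake fund with ten fake investors, each holding 10%
        $fund = Fund::factory()->create(['name' => 'Demo Fund I']);

        Investor::factory()
            ->count(10)
            ->for($fund)
            ->create(['ownership_percentage' => 10]);

        // A cash flow for the domain tests to distribute
        Transaction::factory()->for($fund)->create([
            'type' => 'distribution',
            'amount' => 2_000_000,
        ]);
    }
}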

Behaviour-Driven Development

BDD’s Given/When/Then syntax models test scenarios in a way both developers and business stakeholders understand:

Given: a scenario setup
When: something happens in our system
Then: the result is a predictable output.

This structured narrative helps communicate intent. A finance expert could glance at this test and confirm, “Yes, that’s exactly how the waterfall works.”

You can implement this style in Laravel with PHPUnit or Pest. The key is writing tests that tell a story: set up context (Given), perform an action (When), and describe the expected outcome (Then). For normal tests we use Arrange, Act, Assert, which is also fine but slightly less suited to domain tests.
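
For example, a Pest test with the narrative carried in Given/When/Then comments might look like this. The management fee rule and the ChargeManagementFee action are hypothetical, chosen only to show the shape:

<?php

use App\Actions\ChargeManagementFee;
use App\Models\Fund;

it('charges the annual management fee on committed capital', function () {
    // Given: a $10m fund with a 2% annual management fee
    $fund = Fund::factory()->create([
        'committed_capital' => 10_000_000,
        'management_fee_rate' => 0.02,
    ]);

    // When: one year of fees is charged
    $fee = app(ChargeManagementFee::class)->execute($fund, years: 1);

    // Then: the fee is 2% of committed capital
    expect($fee->amount)->toBe(200_000);
});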

A Practical Way to Calibrate AI and Humans

Looking forward, I think domain tests may become a crucial calibration tool between business intent and AI execution. Think of them as the training examples you provide to a junior developer, except the developer is an AI.

If an AI proposes a database schema change or generates a new Laravel controller, how do we ensure it didn’t break fundamental business assumptions? Run the domain tests. A comprehensive suite will immediately flag any business rule violations, giving both human and AI rapid feedback.

Domain tests also become a sandbox for business validation. Write a test that sets up a complex fund with edge cases (different fee structures, multiple closing dates), run the domain logic, and use Laravel’s debugging tools to inspect the output. It’s like a flight simulator for our business logic: we can try out scenarios in a safe environment.

Implementation in Laravel

Laravel makes it easy to implement domain testing (a sketch of how these pieces fit together follows the list):

  • Use model factories to create test data that feels real.
  • Create dedicated test seeders that load the complete scenario in a repeatable way.
  • Leverage database transactions or the RefreshDatabase trait for test isolation.
  • Structure tests around your domain services, actions, and jobs, not just HTTP endpoints.
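
Putting those pieces together, one possible shape for a domain test class is sketched below. FundScenarioSeeder and DistributeReturns are the hypothetical names from the earlier sketches, not real classes in our codebase.

<?php

namespace Tests\Domain;

use App\Actions\DistributeReturns;
use App\Models\Fund;
use Database\Seeders\FundScenarioSeeder;
use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class FundDistributionTest extends TestCase
{
    use RefreshDatabase;

    protected function setUp(): void
    {
        parent::setUp();

        // Given: the repeatable scenario loaded by the dedicated seeder
        $this->seed(FundScenarioSeeder::class);
    }

    public function test_distributions_follow_ownership_percentages(): void
    {
        $fund = Fund::firstOrFail();

        // When: a $2m distribution is run through the domain action (not an HTTP endpoint)
        $distribution = app(DistributeReturns::class)->execute($fund, amount: 2_000_000);

        // Then: ten investors at 10% each receive equal shares of the investor portion
        $expectedShare = $distribution->totalToInvestors / 10;
        foreach ($distribution->investorAllocations as $allocation) {
            $this->assertEquals($expectedShare, $allocation->amount);
        }
    }
}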

Your tests become a place where everyone can see the essence of the application laid out in scenarios, where changes can be vetted with confidence, and where future tools can learn what “correct” means in your context.

Conclusion

Domain testing is a practical way to:

  • Validate business-critical behaviour with high confidence: if a change breaks the fund carry distribution maths, a domain test will catch it.
  • Communicate intent across disciplines through tests written in business language, serving as living documentation.
  • Build on the best of TDD, BDD, and DDD without reinventing the wheel.
  • Prepare for AI-assisted development by creating tests that define boundaries of correct behaviour and logic.

As we rediscover testing principles in the age of AI, domain testing could be a language of shared understanding between humans, software, and intelligent agents. It ensures that no matter how our code is written (by us or by AI), the true intent and logic of our business domain remain solid.