The 60% Effort Rule in the Age of AI
How to Write Tasks for Humans and Machines

Years ago, I established a set of guidelines for defining tasks in our system. The philosophy was simple: ambiguity kills productivity. We introduced concepts like the "Analyst-Reviewer Conversation" and the "60% Effort Rule" to ensure that by the time we started coding, we knew exactly where we were going.
Today, we are introducing a new player to the team: AI Code Reviewers (like Atlassian Rovo).
This doesn't mean we throw away our old process. On the contrary, our original structure is more relevant than ever. The AI simply changes who validates our work.
Here is the updated guide on how to define a task effectively, ensuring it satisfies both the human developer and the automated agent.
The Core Philosophy: "Can I do this without programming?"
Before we even open a ticket, the "Analyst-Reviewer Conversation" must happen. The most important question remains: "Can I do this without programming?"
Yes: Use existing settings, parameters, maps, or global variables. Do not write code if configuration will suffice.
No: Then the goal changes. Can I build this feature in a way that allows us to do it without programming next time?
I consider this question a structural turning point in how we design features. Treating every client request as a potential configuration problem rather than a coding problem forces us to think in terms of capabilities, not patches.
Instead of implementing one-off logic, we ask whether the requirement can be absorbed into the system as a reusable mechanism. When the answer is yes, the application evolves by extending its own degrees of freedom: what previously required development becomes a matter of configuration. This is how software transitions from a collection of special cases into an adaptive platform. This is what coreBOS and EvolutivoFW are!
Consistently applying this principle changes the trajectory of the product. It encourages us to build infrastructure that preserves existing behavior while expanding what the system can express. Over time, this reduces future development effort, increases operational flexibility, and allows the application to respond to new business demands without continuous structural change.
In short, this question reframes feature requests as opportunities to increase the system’s long-term adaptability rather than merely satisfying the next requirement.
But this question raises another concern: “How do I know if it can be done without programming?”
If you are unaware that a global variable or a configuration setting already exists, you will naturally default to writing code, which is exactly what we want to avoid. Overcoming this "unknown unknown" requires a culture of Shared Knowledge. You cannot operate in a silo. You must feel empowered to ask—and specifically, to ask those teammates who have a history of sharing and mentoring.
This is where Training and Reading become your professional responsibility. You must actively study the system to understand the tools available to you. But the most critical piece of this puzzle is Documentation. You must write the documentation you wish you had found. Every time you solve a problem via configuration rather than code, document it. This acts as a "pay it forward" mechanism: it ensures the next person finds the answer in our knowledge base rather than reinventing the wheel in the codebase.
So, depending on whether we decide to build new infrastructure or not, we arrive at one of two situations:
Yes: Ask for permission and validation, because this path will take more time than a direct hack. It is time you will save the next time this requirement comes up.
No: Develop the simplest compatible solution.
If the answer is "We must code", the discussion above already holds the information we need to create the ticket.

Title and Description: The "Human" Layer
The top half of the ticket is for the humans. It provides the "Why" and the context that an AI cannot fully grasp.
Title: This must be a concise summary. Ideally, this text should be clean enough to serve as the Commit Message later.
Description: This is the detailed explanation. It should include steps, links to designs, screenshots, and the business context.
- 💡Note: You can write this in your team's native language. The AI doesn't strictly need to "understand" the business benefit (like reducing churn), but your team does.
Validation: The "AI" Layer
This is the most critical update to our process. In my original guidelines, VALIDATION was a checklist for the human tester.
Now, the Validation section is a prompt for the AI.
When using tools like Atlassian Rovo or GitHub Copilot for PR reviews, they look for specific instructions to verify the code against. To make this work, the Validation section must follow strict "Machine-Readable" rules.
How to write the Validation section for AI:
Use the "Magic Words": The AI scans for a specific header. You must label this section "Acceptance Criteria" or "Definition of Done" (case-sensitive).
- 💡If you are in a rush, Rovo also recognizes the standard shorthands AC, ACs, or DoD. I recommend you use your system’s templating engine to write this for you.
Language must be English (with current tooling): Even if the rest of the ticket is in Spanish, the Validation criteria must be in English for the current generation of AI agents to verify them against the code.
- ⚠This is what I have read; it genuinely surprises me, and it will probably no longer be true if you are reading this in the (near) future.
Be Binary (Pass/Fail): The AI compares the text to the code.
Bad: "Check that the API works well." (Subjective.)
Good: "API endpoint POST /v1/tickets exists and accepts parameter id." (Measurable.)
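Since the header text and wording rules above are mechanical, they are a good fit for the templating-engine suggestion: let a small helper emit the machine-readable block so nobody mistypes the magic words. This is an illustrative sketch, not a real Jira/Rovo integration; the function name is invented.

```python
# Illustrative helper that renders the machine-readable Validation section.
# It always emits the exact "Acceptance Criteria" header the AI scans for,
# and one numbered, binary criterion per line.

def render_acceptance_criteria(criteria: list[str]) -> str:
    """Render a numbered 'Acceptance Criteria' block (criteria must be in English)."""
    lines = ["Acceptance Criteria"]  # exact, case-sensitive header
    for i, criterion in enumerate(criteria, start=1):
        lines.append(f"{i}. {criterion}")
    return "\n".join(lines)

section = render_acceptance_criteria([
    "Database: Add a boolean column cta_done to the Contact table.",
    "API: Implement endpoint POST /v1/contacts/{id}/cta-done.",
])
print(section)
```

Wiring something like this into your ticket templates means the "Magic Words" and numbering are always right by construction.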
Example: The "Translation" Strategy
We separate the Context (Description) from the Verification (Validation).
| Ticket Section | Audience | Language | Content |
| --- | --- | --- | --- |
| Description | Humans | Native (e.g., Spanish) | "We need to identify users who did X to prevent spamming them." (Why we are doing it.) |
| Validation | AI Agent | English | Acceptance Criteria: Create boolean field has_done_x. Endpoint POST /x_done sets this flag to true. |
Atlassian has some good recommendations:
- Use AI to turn your brainstormed thoughts into clear, structured work item descriptions.
- Use clear, unambiguous language.
- Keep each criterion statement short and focused on one specific thing.
- Break large epics into smaller stories with clear “done” conditions.
One last comment worth making: even with AI everywhere, the Garbage-In → Garbage-Out rule still holds. AI does not reduce ambiguity; it amplifies it. If the criteria are vague, the review will be meaningless.
The "60% Stop and Ask Rule" is Now Automated
My favorite rule has always been: "At 60% of effort, stop and ask: Am I finished?"
The logic is that the remaining 40% of the effort is testing, documentation, and review. If you haven't finished the core logic by the 60% mark, you are off track.
The AI now enforces this rule for us.
When you open a Pull Request (usually around that 60-70% mark), the AI Code Reviewer scans your code against the Validation section. It instantly tells you:
✅ Criteria Met
❌ Criteria Missing
⚠️ Manual Check Needed
If the AI marks a criterion as missing, the answer to "Am I finished?" is objectively No. You don't need a senior reviewer to tell you that you forgot the database migration; the system catches it immediately.
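To make the ✅/❌/⚠️ idea tangible, here is a toy classifier. This is emphatically not how Rovo actually reviews code; it is a minimal illustration of the principle that a criterion is checkable only if it names concrete identifiers (fields, endpoints, tables) that can be searched for in the diff, and otherwise falls back to a manual check.

```python
# Toy illustration (NOT Rovo's real mechanism): classify each acceptance
# criterion as "met", "missing", or "manual check needed" by searching the
# diff for the code-like tokens (snake_case names, URL paths) it mentions.
import re

def review(criteria: list[str], diff_text: str) -> dict[str, str]:
    results = {}
    for criterion in criteria:
        # Extract tokens containing '_', '/', or '{' -- field names and endpoints.
        tokens = re.findall(r"[/\w{}.-]*[_/{][/\w{}.-]*", criterion)
        if not tokens:
            results[criterion] = "manual check needed"  # nothing measurable
        elif all(tok in diff_text for tok in tokens):
            results[criterion] = "met"
        else:
            results[criterion] = "missing"
    return results
```

Notice the side effect: a purely subjective criterion ("works well") yields no tokens at all, which is exactly why it degrades the review to a manual check.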

Remember to keep stakeholders informed. If the dates are slipping, notify your peers. Keep the conversation moving!
Effort & Priority
Finally, remember that the remaining ticket fields matter:
- Dates: We may hate deadlines, but if the dates slip, notify the team immediately.
- Priority: always urgent
- Tags
- Stakeholders: informer, validator, …
An example
Title: Implementation of Call to Action Flag
Description (Context): To optimize marketing campaigns, we need to centralize the call-to-action status. Currently, there is a blind spot: we do not know whether the user has performed the action or not.
The goal is to use our central application as a "Single Source of Truth" from where we can confidently determine if a contact should be included or not in an email marketing campaign.
Acceptance Criteria (Rovo will scan this section. Must be in English.)
1. Database: Add a boolean column cta_done to the Contact table.
2. API: Implement endpoint POST /v1/contacts/{id}/cta-done
3. Logic: The endpoint must be accessible only by a user with a valid API token.
4. Logic: The endpoint must accept a JSON payload with contact_id.
5. Logic: When the endpoint is called, update cta_done to true.
Business Goals (Manual Verification) (kept separate so Rovo doesn't flag them as "Missing Code")
• ⚠️ Churn: Verify that users with this flag stop receiving emails.
• ⚠️ Conversion: Verify that marketing spend is focused only on users with a 0€ balance.
Priority: Medium
Dates: 2026-02-20
Tags: API, Marketing
Effort: 1 day
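The acceptance criteria in the example ticket are concrete enough that even a framework-free sketch can satisfy them. The handler below is hypothetical: the in-memory dict stands in for the Contact table and the token set for the real authentication layer; only the endpoint, field, and payload names come from the ticket itself.

```python
# Hypothetical handler behind POST /v1/contacts/{id}/cta-done.
# CONTACTS stands in for the Contact table; VALID_TOKENS for real auth.

CONTACTS = {42: {"cta_done": False}}  # criterion 1: boolean column cta_done
VALID_TOKENS = {"secret-api-token"}

def handle_cta_done(contact_id: int, token: str, payload: dict) -> tuple[int, dict]:
    """Return (HTTP status, response body) for the endpoint (criterion 2)."""
    if token not in VALID_TOKENS:
        return 401, {"error": "invalid API token"}      # criterion 3
    if payload.get("contact_id") != contact_id:
        return 400, {"error": "contact_id mismatch"}    # criterion 4
    contact = CONTACTS.get(contact_id)
    if contact is None:
        return 404, {"error": "contact not found"}
    contact["cta_done"] = True                          # criterion 5
    return 200, {"contact_id": contact_id, "cta_done": True}
```

Each criterion maps to one branch, which is exactly what lets an automated reviewer (or a human) tick them off one by one; the business goals, by contrast, have no counterpart in the code, which is why they are kept out of the Acceptance Criteria section.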
Summary
The goal of a task definition hasn't changed: we want to avoid wasted effort.
The Description ensures the humans know why we are building it.
The Validation section ensures the AI can verify what we built.
By being strict with our Validation criteria—using English and specific headers—we turn our ticketing system into an automated quality assurance engine.
