Feb 17–Feb 23, 2026
We spent this week applying OpenClaw to real work instead of theory. There’s a lot of noise about AI replacing jobs, but what we’re actually building is an operations assistant that helps a small team move faster while keeping humans responsible for decisions. The theme of the week was simple: AI should reduce friction, not replace judgment. Here’s what moved forward:
1. Oracle Payroll: Turning Policy Into Working Logic
One major focus this week was compliance work inside Oracle E-Business Suite Payroll around NYC Sick/Safe leave rules. AI isn’t replacing Oracle consultants any time soon—systems like payroll still require deep understanding of how the payroll engine behaves, how Fast Formulas actually execute, and where edge cases break production.
But OpenClaw helped accelerate the thinking around the work. We used it to translate policy language into structured logic, explore Fast Formula approaches, think through PL/SQL helper functions, and map potential failure scenarios. The assistant didn’t make decisions—it helped organize the problem faster so we could.
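To make “policy language into structured logic” concrete, here is a minimal sketch of the kind of accrual model that exercise produces. The accrual rate and cap below are illustrative assumptions for this sketch only; the real rules have employer-size tiers and edge cases, and the production logic lives in Fast Formulas and PL/SQL inside the payroll engine, not in Python.

```python
from dataclasses import dataclass

# Illustrative parameters only -- actual NYC Sick/Safe leave rules have
# employer-size tiers and edge cases that belong in the real payroll logic.
HOURS_WORKED_PER_HOUR_EARNED = 30  # assumed: 1 hour earned per 30 worked
ANNUAL_CAP_HOURS = 56              # assumed cap for this sketch

@dataclass
class LeaveBalance:
    earned: float = 0.0

    def accrue(self, qualifying_hours_worked: float) -> float:
        """Accrue leave for one period, respecting the annual cap.

        Returns the hours actually granted this period.
        """
        new_hours = qualifying_hours_worked / HOURS_WORKED_PER_HOUR_EARNED
        capped = min(self.earned + new_hours, ANNUAL_CAP_HOURS)
        granted = capped - self.earned
        self.earned = capped
        return granted

balance = LeaveBalance()
granted = balance.accrue(80)  # an 80-hour pay period
print(round(granted, 2))      # 80 / 30 hours earned
```

Even a toy model like this surfaces the questions that matter in the real system: what counts as qualifying time, and what happens in the period where the cap is crossed.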
2. Testing Confidence: From ~16,000 Possibilities to 9 Scenarios We Trust
Payroll systems rarely fail on the happy path; they fail at the boundaries. Once you account for combinations of week 1 vs. week 2 behavior, different hour distributions across days, qualifying vs. non-qualifying time, and thresholds just under, exactly at, and just over the limits, the scenario space grows quickly. We estimated roughly 16,000 plausible combinations.
Instead of brute-forcing every possibility, we used AI to map the edges of the problem and identify where failures were most likely. From there we built nine targeted test scenarios that stress the real risk areas. That’s how we moved toward confidence: not by “trusting AI,” but by using it to accelerate structured reasoning and test design.
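The shape of that reduction can be sketched with a deliberately tiny model. The dimensions and threshold below are illustrative, not our actual test matrix, and the filter stands in for the judgment calls that produced our nine scenarios:

```python
from itertools import product

# Illustrative scenario dimensions (not the actual test matrix).
weeks = ["week1", "week2"]
distributions = ["front_loaded", "even", "back_loaded", "single_day"]
qualifying = [True, False]

# Hours relative to an accrual threshold: failures live at the edges.
threshold = 30
hour_values = [threshold - 1, threshold, threshold + 1]

full_space = list(product(weeks, distributions, qualifying, hour_values))
print(len(full_space))  # exhaustive space, even for this tiny model

# Boundary-value selection: keep only scenarios with qualifying time that
# sit exactly at or just over the threshold, where failures cluster.
targeted = [s for s in full_space if s[2] and s[3] >= threshold]
print(len(targeted))
```

The point isn’t the specific numbers; it’s that the exhaustive space grows multiplicatively with each dimension, while a boundary-focused filter keeps the scenarios that actually carry risk.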
3. Infrastructure First: Fix Foundations Before Scaling Automation
We also spent time on less glamorous but important work—stabilizing infrastructure. An intermittent issue, eventually traced back to nginx FastCGI temporary-file permissions, was causing unreliable behavior. Fixing it wasn’t exciting, but it reinforced something we keep seeing:
If infrastructure is unstable, AI doesn’t solve the problem—it just helps you fail faster. Reliability is still the prerequisite for automation.
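One practical takeaway is to turn this class of fix into a cheap health check so it can’t silently regress. Below is a minimal sketch of such a check; the nginx FastCGI temp path varies by distribution (e.g. `/var/lib/nginx/fastcgi` is an assumption here), so the directory to check would come from your own config.

```python
import os
import stat
import tempfile

def check_temp_dir(path: str) -> list[str]:
    """Return a list of problems with a temp directory, empty if healthy."""
    problems = []
    if not os.path.isdir(path):
        return [f"{path}: missing"]
    st = os.stat(path)
    if not st.st_mode & stat.S_IWUSR:
        problems.append(f"{path}: owner lacks write permission")
    # A cheap end-to-end check: can this process actually create a file there?
    try:
        with tempfile.NamedTemporaryFile(dir=path):
            pass
    except OSError as exc:
        problems.append(f"{path}: write failed ({exc})")
    return problems

# In production this would run as the nginx worker user against the real
# FastCGI temp path, e.g. check_temp_dir("/var/lib/nginx/fastcgi").
print(check_temp_dir(tempfile.gettempdir()))
```

A check like this run from monitoring, as the same user the worker runs as, catches the permissions drift before it shows up as intermittent request failures.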
4. Turning Chaos Into a Working System
Another project this week started with something very ordinary: seed packet photos from a small farm. The inventory existed—but only in scattered images, notes, and memory. So we worked backwards from reality.
First we collected about 245 seed packet photos as the source dataset. Then we used OCR with AI assistance to extract useful information like varieties, brands, and packet descriptions. Because OCR is messy, the process was designed for human review instead of blind automation.
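The review-first design can be sketched simply: every extracted record carries a confidence score, and anything below a cutoff is routed to a human queue rather than written straight to inventory. The records, brands, and the 0.85 cutoff below are made up for illustration; the post doesn’t name the actual OCR tooling or thresholds.

```python
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.85  # assumed confidence cutoff for this sketch

@dataclass
class OcrRecord:
    photo: str
    variety: str
    brand: str
    confidence: float

def triage(records):
    """Split OCR results into auto-accepted rows and a human-review queue."""
    accepted, review = [], []
    for r in records:
        (accepted if r.confidence >= REVIEW_THRESHOLD else review).append(r)
    return accepted, review

# Simulated OCR output; real records would come from the extraction step.
batch = [
    OcrRecord("IMG_0001.jpg", "Cherokee Purple Tomato", "Baker Creek", 0.97),
    OcrRecord("IMG_0002.jpg", "Biue Lake Bean", "Burpee", 0.61),  # garbled read
]
accepted, review = triage(batch)
print(len(accepted), len(review))
```

The garbled low-confidence read goes to a person instead of into the inventory, which is the whole point of designing for review instead of blind automation.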
From there we normalized the data—reducing duplicates, standardizing names, and organizing inconsistent text. Finally, instead of building a complicated custom app, we created a WordPress inventory dashboard so the team could use the system immediately. We added inventory fields like quantity on hand, operational fields like indexing visibility, and export tools so the system can be reviewed and used while it matures.
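The normalization step above can be sketched with nothing but the standard library: lowercase and collapse whitespace, then greedily drop near-duplicate names using a similarity ratio. The 0.9 threshold and the sample names are assumptions for illustration, not our production values.

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lowercase and collapse whitespace -- a first pass at standardizing."""
    return " ".join(name.lower().split())

def dedupe(names, threshold: float = 0.9):
    """Greedy near-duplicate reduction using a string-similarity ratio."""
    kept = []
    for name in map(normalize, names):
        if not any(SequenceMatcher(None, name, k).ratio() >= threshold
                   for k in kept):
            kept.append(name)
    return kept

raw = [
    "Cherokee Purple  Tomato",
    "cherokee purple tomato",
    "Cherokee Prple Tomato",  # OCR typo, still a near-duplicate
    "Blue Lake Bean",
]
print(dedupe(raw))
```

Greedy fuzzy matching like this is deliberately conservative: it shrinks the obvious duplicates while leaving borderline cases for the same human review loop the OCR step uses.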
Adoption beats architecture at the beginning.
So What?
- AI accelerated complex payroll analysis while humans retained responsibility
- Testing confidence improved by reducing ~16,000 edge-case permutations to 9 scenarios we trust
- Infrastructure issues were fixed before they became outages
- A messy real-world dataset became a usable operational system
Next up: expand compliance test coverage and document edge-case behavior, harden monitoring so reliability doesn’t regress, and improve the inventory workflow now that the foundation exists.
AI didn’t replace the work this week—it helped us do it better.