The cracked engineer arbitrage
Verification in the age of the prompt
If you missed last week’s newsletters, catch up here and here. If this newsletter was forwarded to you, subscribe so you don’t miss the next one:
TOGETHER WITH CREDIT DIRECT
The “Detty December” effect hit the FX market in two different ways. We saw a 0.77% gain in the NFEM, yet the parallel market weakened further to ₦1,479/$.
Our research note analyzes the drivers behind this widening gap and the 1.87% bump in foreign reserves. It also explores why foreign equity outflows hit ₦36.66bn just as we entered the festive season.
Andela’s assessment stack
If you’re a company that hires software engineers to place with clients, a.k.a. a talent marketplace, you need to hire mid-level to senior engineers pretty regularly because clients aren’t blowing up your phone requesting junior devs. Big clients want engineers with deep experience. (In its earlier Africa-heavy phase, Andela acknowledged that demand skews toward senior talent.)
Just about anyone can publish a LinkedIn post and hire a software engineer. Not everyone can offer a surefire guarantee that the engineer they’ve sent is actually elite.
I assume Andela’s pitch is roughly something like: “Here is an engineer you will never meet in person, living in a country you might never visit, working on software you desperately need.” While the engineer is the underlying asset, what Andela monetises is the risk-reduction wrapper that says, “This one will work.”
The pre-AI hiring process was imperfect; people fibbed on their resumes and exaggerated their contributions, but you could still pretend the process measured something objective.
Artificial intelligence has muddied the waters. Hiring managers now have to figure out who’s juicing outcomes and looking better than they really are. AI can make great engineers exceptional, but it also makes mediocre engineers look hireable long enough to pass a 45-minute screen.
Interviewing.io recently ran a controlled experiment that shows some of the issues with traditional interviews. They split 32 experienced engineers (4+ years) into groups to perform audio-only coding interviews while secretly using ChatGPT.
On LeetCode problems asked verbatim, the pass rate was 73% (vs. a 53% baseline). Even on “modified” LeetCode problems, the pass rate remained high at 67%. Minor tweaks didn’t stop the LLM.
On truly custom questions, the pass rate plummeted to 25%. Crucially, this was lower than the baseline, suggesting that relying on AI as a crutch for unfamiliar problems actually hindered performance.
Interestingly, no one was caught. Despite surveys designed to surface suspicion, interviewers failed to flag a single candidate for cheating.
If your product is confidence, what happens when confidence is easier to fake?
You improve your ability to make decisions.
In March 2023, Andela acquired Qualified, a developer assessment platform, and in May 2023, it acquired Casana, an IT talent network out of Munich. This week, it disclosed its acquisition of Woven, a company known for high-fidelity technical assessments that simulate real engineering work.
Andela did not disclose the terms for any of the three acquisitions.
The Woven acquisition feels like a telling buy for the AI moment.
Launched in 2018, Woven raised a $2.5m seed round in 2020 and an $8m Series A. Its pitch is that every candidate’s submission is reviewed by two human engineers using an “obsessively detailed” rubric.
It also emphasises integrity checks to detect plagiarism or undisclosed AI assistance.
Per Andela’s statement, Woven is built on top of Qualified, the aforementioned developer assessment platform. So you can read this as a consolidation of the assessment stack.
Andela’s announcement also mentions a desire for “AI-native engineers” who can build with AI. Is it contradictory to police AI usage during an assessment? Not in 2026. Hiring now requires two distinct measurements:
Baseline engineering fundamentals: Can you reason, debug, review code, and understand tradeoffs without outsourcing thinking?
AI-in-the-loop performance: Given the tools you’ll actually use on the job, can you ship good software, safely, with taste and judgment?
It’s perfectly rational to want candidates to be excellent with AI while still refusing to let candidates use AI to fake fundamentals in an evaluation designed to measure fundamentals.
If you enjoyed this newsletter, please like, share or leave a comment (or do all three. Why am I limiting you?)
See you on Sunday.