A growing unease is shaping how professionals interact with artificial intelligence as its capabilities in creating and executing knowledge work expand. Dan Pratl, founder of Quadron, sees this anxiety not merely as a fear of automation but as a deeper structural problem in how value is recognized.
“We’ve reached a point where AI’s maturation makes almost everyone feel insecure,” Pratl observes, pointing to a disconnect between technological progress and the systems that reward human contribution. Frameworks for recognition and financial return have failed to evolve, he argues, devolving instead into speculative environments; he cites crypto markets and retail-driven trading ecosystems as examples.
Pratl argues that AI is accelerating a longstanding shift. “AI commoditizes knowledge and its execution efficiently,” he explains. “What becomes scarce is the last mile: expertise, judgment, and the ability to apply it.” As knowledge grows more common and execution more automated, he believes, distinguishing high-quality work from low-quality output becomes harder, especially for non-experts.
This creates what Pratl calls a “meta problem”: the volume of information increases while systems for verifying credibility lag behind. “For non-experts, all high-quality work appears similar,” he says, stressing that current structures cannot reliably distinguish accurate insights from confident but baseless claims.
In this setting, Pratl suggests, visibility often substitutes for credibility. Social platforms reward attention rather than accuracy, he argues, allowing “the loudest voices” to drown out rigorous but less visible expertise. “There’s no system rewarding correctness,” he says. “No fast mechanism exists to verify individuals and give non-consensus voices a platform.”
Pratl warns that spreading AI-generated content without reliable credibility signals threatens decision-making in sectors from business to healthcare. Research has shown online misinformation costs the global economy about $78 billion per year, underlining the gravity of the issue.
As a solution, Pratl proposes a credibility economy: a system designed to measure, verify, and reward expertise in a systematic way. Rather than valuing output alone, this model centers on judgment and trust, creating mechanisms that attribute value based on the quality and impact of decisions.
His company, Quadron, aims to build the infrastructure such a system requires. According to Pratl, it involves three components.
The first is an enterprise layer that provides a “finishing layer” for organizational work. “I use several productivity platforms, yet a comprehensive finishing layer is often missing,” he says. This layer ensures that individuals are recognized for sound judgment and for delivering validated outcomes, rather than merely for contributing to ongoing workflows.
The second is a verification layer intended to modernize how knowledge is structured and shared. Pratl views existing intellectual property systems as outdated and inadequate for the pace and scale of modern knowledge exchange. Quadron is developing mechanisms that allow insights to be surfaced and evaluated securely.
The third element is what Pratl calls credibility markets, which differ from traditional prediction markets in their focus on domain-specific expertise. “It’s not generalized speculation. You’re not betting on external events without understanding the odds,” he says. These markets calibrate credibility in real time, matching individuals with relevant expertise and assessing judgment within appropriate contexts. “Organizations need context and structure with a different methodological approach. Individuals need rewards to organize information accordingly. We’re building systems to provide both.”
Pratl’s views draw on a career spanning law, open-source software, crowdfunding, and crypto, each of which revealed limitations in how systems incentivize and sustain involvement. Reflecting on that experience, he says, “Many systems lacked structural integrity at the incentive level to endure beyond their creators, often losing alignment when initial motivations waned.”
A personal catalyst came during a medical crisis involving his mother, when access to key information proved inconsistent even though the information was technically available. “The information was centralized, but not truly accessible,” he says, describing a system misaligned with surfacing actionable knowledge.
Ultimately, the outcome relied on informal networks, a reality he deems untenable given current tools.
Pratl argues that continued AI advancement will exacerbate these challenges in the coming years unless new systems emerge. Without mechanisms that reward accuracy and surface credible expertise, he warns, decision-making will increasingly depend on visibility or chance rather than informed judgment.
“We’re all experts,” he concludes. “Our expertise is valuable if structured and surfaced correctly.” He sees the credibility economy as a way to align technological advancement with human value, ensuring that individuals remain active participants in AI systems while being recognized and rewarded for the quality of their contributions.
