Introducing the Superficial App and API

Product · September 2, 2025 · 5 min read

Today we’re introducing the Superficial App and API — our first product milestone on the path to reliable artificial intelligence.

Over the past decade, neural networks, like the large language models many of us use daily, have transformed how we interact with information. These systems are now capable of assisting with increasingly complex tasks across a wide range of domains. But while their capabilities continue to expand, one problem remains unresolved: they make mistakes.

The best models today make errors in 15% of the claims they produce. The only true remedy is human intervention: reading carefully, checking facts, stepping in to fix what goes wrong. This works in low-risk settings, but it doesn't scale to the world AI is entering.

As AI systems move from being passive assistants to acting on our behalf, these errors don’t just mislead. They cascade. They compound. They occur at machine speeds that make human correction too slow to be effective.

This is the challenge Superficial was built to solve.

Superficial builds on decades of research in neural-symbolic computing — including foundational work on integrating learning and reasoning by Garcez et al. (2019) and recent advances in verifiable neuro-symbolic systems by Yu et al. (2023) — to release the first neurosymbolic system designed to make neural AI outputs verifiable, auditable, and dependable at scale. Superficial combines both strands of AI, pairing fast, adaptive neural perception with structured, deterministic symbolic reasoning, to overcome the limitations of purely neural systems.

Across more than 55,000 claims from the Google DeepMind FACTS dataset, Superficial boosts the average accuracy of top AI models to 99.7% — a 9.9% relative improvement. Many of today's top models, including GPT-5, Grok 4, Gemini 2.5 Flash, and gpt-oss, score 100% after one-shot enhancement with Superficial.
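To make the headline numbers concrete: the reported figures imply a pre-enhancement baseline that is not stated explicitly, but it can be back-calculated from the 99.7% accuracy and the 9.9% relative gain (an inference from the published numbers, not a published figure itself):

```python
# Back-calculate the implied baseline accuracy from the reported numbers.
# A 9.9% relative improvement means: enhanced = baseline * (1 + 0.099).
enhanced = 99.7        # average accuracy after enhancement, in percent
relative_gain = 0.099  # 9.9% relative improvement

baseline = enhanced / (1 + relative_gain)
print(round(baseline, 1))  # → 90.7
```

In other words, the benchmark implies top models start at roughly 90.7% average accuracy on these claims before enhancement.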

This makes it possible, for the first time, to embed verification directly into AI workflows and guarantee reliability at machine speed.

[Figure: Accuracy Score]

Source: Superfacts Benchmark (benchmarks.superficial.org). Real-world accuracy depends on source availability and domain complexity.

How we got here: Overcoming the brittleness of symbolic systems

Symbolic AI has long offered the promise of logic, traceability, and interpretability. But in practice, symbolic systems have proven brittle. They rely on predefined rules, making them fragile in real-world settings where language is ambiguous, inputs are noisy, and new concepts appear constantly. This rigidity has historically limited symbolic reasoning to narrow domains with carefully structured data.

Superficial removes this barrier. By pairing neural flexibility with symbolic structure, it establishes a general-purpose verification layer that can adapt dynamically to unpredictable, open-world inputs — without sacrificing determinism or auditability.

There are two key components that make Superficial work.

1. Reliable extraction of independent claims

The first step is isolating independently verifiable claims from messy neural outputs.

Superficial accepts any AI-generated response. A fine-tuned neural model parses this content and extracts a list of atomic claims — discrete, self-contained assertions that can be independently verified. Each claim includes sufficient context to eliminate ambiguity and is structured for downstream logic processing.

This atomic representation is critical. It allows the system to treat each assertion individually — removing noise, reducing compounding error, and enabling high-precision reasoning. Only by reaching this level of reliable decomposition can symbolic reasoning be applied effectively downstream.
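The shape of this decomposition can be sketched as follows. This is a minimal illustration, not Superficial's actual model or API: the `AtomicClaim` structure and `extract_claims` function are hypothetical, and a naive sentence split stands in for the fine-tuned extraction model, which also resolves pronouns and injects missing context so each claim stands alone.

```python
from dataclasses import dataclass

@dataclass
class AtomicClaim:
    text: str              # self-contained assertion, ready for verification
    span: tuple[int, int]  # character offsets into the original response

def extract_claims(response: str) -> list[AtomicClaim]:
    # Stand-in for the fine-tuned extraction model: a naive sentence split.
    # The real system additionally disambiguates each claim so it can be
    # checked in isolation.
    claims, pos = [], 0
    for sentence in response.split(". "):
        sentence = sentence.strip().rstrip(".")
        if sentence:
            start = response.find(sentence, pos)
            claims.append(AtomicClaim(sentence + ".", (start, start + len(sentence))))
            pos = start + len(sentence)
    return claims

claims = extract_claims("The Eiffel Tower is in Paris. It opened in 1889.")
print([c.text for c in claims])
# → ['The Eiffel Tower is in Paris.', 'It opened in 1889.']
```

Note that the second claim still contains the pronoun "It" — exactly the kind of ambiguity the production extraction step is described as eliminating before verification.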

[Figure: Checking Claims]

2. Neural-derived symbolic grounding to enable dynamic rule setting

The second step is verification — but not with brittle, hardcoded rules.

Superficial uses neural models to dynamically generate symbolic logic components that verify each claim against trusted sources. These components are compiled into deterministic programs and evaluated using verifiable symbolic reasoning. The resulting logic is explainable, auditable, and produces consistent outcomes.

Claims are verified as Confirmed or Refuted, with full justification and traceable reference.

What makes this possible is the dynamic generation of symbolic grounding logic — driven by neural pattern recognition, but structured according to deterministic rules. This reduces the need for hand-written logic paths or domain-specific ontologies and makes symbolic reasoning robust even in novel or ambiguous contexts.
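The verification step described above can be sketched in miniature. Everything here is illustrative: the `Verdict` structure, the toy fact table, and the hand-written predicate are hypothetical stand-ins. In the system described, the rule would be generated dynamically by a neural model and compiled into a deterministic program; the key property shown is that evaluation itself is deterministic, explainable, and tied to a traceable reference.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    status: str         # "Confirmed" or "Refuted"
    justification: str  # why the rule evaluated the way it did
    reference: str      # traceable pointer into the trusted source

# Toy fact table standing in for evidence retrieved from trusted sources.
SOURCE = {"eiffel_tower.height_m": 330, "eiffel_tower.opened": 1889}

def verify(claim: str, rule: Callable[[dict], bool], reference: str) -> Verdict:
    # The rule is a deterministic predicate over the source facts, so the
    # same claim and evidence always yield the same verdict.
    ok = rule(SOURCE)
    return Verdict(
        status="Confirmed" if ok else "Refuted",
        justification=f"Rule for {claim!r} evaluated to {ok} against {reference}",
        reference=reference,
    )

v = verify(
    "The Eiffel Tower opened in 1889.",
    lambda facts: facts["eiffel_tower.opened"] == 1889,
    "eiffel_tower.opened",
)
print(v.status)  # → Confirmed
```

Because the rule is an ordinary deterministic program, its evaluation can be logged, replayed, and audited — in contrast to asking a neural model directly, which can return different answers on different runs.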

This is the core innovation behind the Superficial App and API: simple-to-use products that make it possible to perform symbolic reasoning over unpredictable, neural-generated content — reliably, at scale, and without brittle preconditions.

[Figure: Complete Verification]

App and API available now in preview

We’re opening preview access to the Superficial App and beta access to the Superficial API for selected teams and individuals.

Visit www.superficial.app to try it out or email us at contact@superficial.org to request access to the API.

We believe that the future of AI doesn’t require choosing between capability and reliability. With the right infrastructure, we can build systems that are both. The Superficial App and API are our first step on this path.