This project started from a simple frustration. So many digital systems ask for documents, but when something goes wrong, users are left without answers. We noticed that verification tools often focus on whether documents look valid, while ignoring whether the information inside actually makes sense.
As students, we’ve experienced delays, silent rejections, and long “under review” states ourselves. That made us curious: what if the system checked logic and consistency instead of just appearance? That question became the foundation of DocVerity.
What We Learned
Building DocVerity pushed us to think beyond models and features and focus on system design. We learned how small design choices — like whether a system explains its decisions — can significantly impact trust and user experience.
We also learned how complex document verification really is. Fraud isn’t always obvious or malicious-looking; often it’s subtle, inconsistent, and hidden behind real identities. Understanding this changed how we approached the problem.
How We Built It
We approached this project iteratively. We first broke the problem down into simple steps: document upload, basic authenticity checks, consistency analysis across documents, and clear output explanations.
Instead of trying to build everything at once, we focused on a realistic MVP that proves the core idea. This meant spending time designing workflows, testing assumptions, and refining how results are presented so they are understandable to non-technical users.
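To make those stages concrete, here is a minimal sketch of how they could fit together. This is an illustrative outline under our own assumptions, not DocVerity's actual implementation; all class and function names (Document, check_authenticity, check_consistency, verify) are hypothetical placeholders.

```python
# Hypothetical, simplified sketch of the verification flow described above.
# Names and checks are illustrative placeholders, not DocVerity's real code.
from dataclasses import dataclass, field


@dataclass
class Document:
    doc_id: str
    doc_type: str           # e.g. "transcript", "id_card"
    fields: dict[str, str]  # extracted key/value pairs


@dataclass
class VerificationResult:
    passed: bool
    findings: list[str] = field(default_factory=list)


def check_authenticity(doc: Document) -> VerificationResult:
    """Basic authenticity check (placeholder: required fields must be present)."""
    missing = [k for k in ("name", "date_of_birth") if k not in doc.fields]
    findings = [f"{doc.doc_id}: missing field '{k}'" for k in missing]
    return VerificationResult(passed=not missing, findings=findings)


def check_consistency(docs: list[Document]) -> VerificationResult:
    """Cross-document consistency: flag fields whose values disagree."""
    findings: list[str] = []
    seen: dict[str, tuple[str, str]] = {}  # field name -> (value, source doc)
    for doc in docs:
        for key, value in doc.fields.items():
            if key in seen and seen[key][0] != value:
                findings.append(
                    f"'{key}' differs: '{seen[key][0]}' in {seen[key][1]} "
                    f"vs '{value}' in {doc.doc_id}"
                )
            else:
                seen[key] = (value, doc.doc_id)
    return VerificationResult(passed=not findings, findings=findings)


def verify(docs: list[Document]) -> list[str]:
    """Run each stage and return plain-language explanations, not just a verdict."""
    explanations: list[str] = []
    for doc in docs:
        explanations.extend(check_authenticity(doc).findings)
    explanations.extend(check_consistency(docs).findings)
    return explanations or ["All checks passed: documents are internally consistent."]
```

The point of the sketch is the last stage: every check returns human-readable findings rather than a bare pass/fail flag, which is the "clear output explanations" step we prioritized in the MVP.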
Much of our time went into thinking, experimenting, and refining the idea. Over the course of the ideathon, we spent hours across multiple days researching, discussing edge cases, and reworking the solution: not just writing code, but reasoning through how this system should behave in the real world.
Challenges We Faced
One of the biggest challenges was balancing accuracy with fairness. We didn’t want a system that aggressively flags everything and creates more friction. Designing something that is cautious, explainable, and useful at the same time required careful trade-offs.
Another challenge was deciding what not to build. There were many directions this project could go, but staying focused on the core friction point was key.
Despite these challenges, the process reinforced why this problem matters and why solutions in this space need to be both technically sound and human-centered.