Marcel Gläser
Lessingstraße 4
15345 Altlandsberg
Germany
Email: contact [at] howyoucode [dot] dev
X: @marcel_glaeser
Marcel Gläser (address as above)
The contents of this website are created with the greatest possible care. However, we cannot guarantee the accuracy, completeness, or timeliness of the content.
This website uses the GitHub API. GitHub is a service of GitHub Inc., 88 Colin P Kelly Jr St, San Francisco, CA 94107, USA. Usage is subject to the GitHub Terms of Service.
Marcel Gläser, Lessingstraße 4, 15345 Altlandsberg, Germany.
Email: contact [at] howyoucode [dot] dev
Code analysis runs entirely in your browser. No source code is ever sent to or stored on our servers.
This website uses the GitHub REST API and GitHub OAuth. Your browser calls the GitHub API to retrieve repository data and file contents. The OAuth flow runs through our server (token exchange), but the token is only returned to your browser. GitHub's Privacy Statement applies.
This website is hosted on a server in the EU (OVH, France). The server may collect technical log data (IP address, timestamp, browser type). Analysis results are stored as JSON files on the server (username, score, statistics — no source code).
You can delete your leaderboard entry yourself at any time. To delete your stored card, contact us at the email address above. Under the GDPR, you have the rights of access, rectification, erasure, and restriction of processing of your personal data.
This privacy policy may be updated occasionally. The current version is always available on this page.
Last updated: February 2026
We scan your actual source code — not stars, not commit frequency, not contribution graphs. The analysis runs entirely in your browser. No code is ever sent to our servers.
Every developer has a unique fingerprint across these dimensions, each scored 0–10:
Your Code Quality Score is derived from all 8 dimensions, scaled to 0–100.
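As a rough illustration of the aggregation, here is a minimal sketch assuming equal weights across the eight dimensions. The real weights are deliberately unpublished, and the function name `quality_score` is hypothetical:

```python
def quality_score(dimensions):
    """Combine eight 0-10 dimension scores into a 0-100 score.

    Equal weighting is an assumption for illustration only; the
    actual weights are not published, to prevent gaming.
    """
    assert len(dimensions) == 8
    assert all(0 <= d <= 10 for d in dimensions)
    # Average of the eight dimensions, rescaled from 0-10 to 0-100.
    return round(sum(dimensions) / len(dimensions) * 10)

print(quality_score([7, 8, 6, 9, 7, 8, 5, 10]))  # 75
```

With weighted dimensions, the average would simply become a weighted sum; the 0-100 scaling stays the same.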
The leaderboard goes beyond raw code quality. It factors in:
We don't publish exact weights to prevent gaming. We do publish what we detect and why it matters.
We put significant effort into making the analysis fair:
A newcomer writing clean code can beat a veteran with thousands of dormant repos. We believe in measuring how you write code, not how much attention your repos get.
This is a heuristic assessment, not absolute truth. No automated tool can fully capture code quality — but we think analyzing real code beats counting stars.
The scoring system evolves as we learn. Last updated: February 2026.
Real code signals. Not stars.
We show detected patterns. No magic. Analysis runs in your browser — no code touches our servers.
Language-aware scoring. Import sweet spots, naming conventions, error patterns: all calibrated per language and LOC-weighted.
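LOC weighting can be sketched as follows. The data shape (`language -> (score, loc)`) and the function name are assumptions for illustration, not the real pipeline's model:

```python
def loc_weighted_score(per_language):
    """Weight each language's score by its share of lines of code.

    `per_language` maps language -> (score, loc). A language that
    makes up most of your code dominates the combined score.
    """
    total_loc = sum(loc for _, loc in per_language.values())
    if total_loc == 0:
        return 0.0
    weighted = sum(score * loc for score, loc in per_language.values())
    return weighted / total_loc

print(loc_weighted_score({"go": (8.0, 3000), "python": (6.0, 1000)}))  # 7.5
```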
We score idioms: Go's errors.Is/errors.As and %w wrapping, Rust's ?, .context(), and map_err, Swift's guard let. Not try/catch density.
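Idiom detection can be approximated with per-language pattern counts. This is a deliberately simplified sketch: the pattern set, the `IDIOMS` table, and the function name are illustrative assumptions, not the real rule set:

```python
import re

# Illustrative subset of per-language error-handling idioms.
IDIOMS = {
    "go": [r"errors\.Is\(", r"errors\.As\(", r"%w"],
    "rust": [r"\?\;", r"\.context\(", r"\.map_err\("],
    "swift": [r"guard let "],
}

def idiom_hits(source, language):
    """Count idiomatic error-handling constructs in a source string."""
    return sum(len(re.findall(p, source)) for p in IDIOMS.get(language, []))

snippet = 'if errors.Is(err, io.EOF) { return fmt.Errorf("read: %w", err) }'
print(idiom_hits(snippet, "go"))  # 2
```

A real analyzer would parse rather than pattern-match, but the principle is the same: reward the idioms a language community actually uses, instead of counting try/catch blocks.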
Tests softened. Risky patterns weigh less in test files and CLI entry points.
No bonus for more catches. Swallow/log-only hurts.
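Swallowed errors are detectable with a simple heuristic: a catch block whose body is empty or contains only a log call. The regex below is an illustrative sketch, not the production detector:

```python
import re

# Matches `catch (...) {}` and `catch (...) { console.log(...); }`.
LOG_ONLY = re.compile(r"catch\s*\([^)]*\)\s*\{\s*(console\.log\([^;]*\);?\s*)?\}")

def swallowed_catches(source):
    """Count catch blocks that are empty or log-only (heuristic sketch)."""
    return len(LOG_ONLY.findall(source))

bad = "try { run(); } catch (e) {} try { run(); } catch (e) { console.log(e); }"
print(swallowed_catches(bad))  # 2
```

Note that a catch that rethrows or handles the error does not match, so it is not penalized.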
Branching isn’t “bad”. Extremes cost readability points.
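One way to penalize only extremes is to score branching density rather than raw branch counts. The thresholds and the function name below are assumptions for illustration:

```python
def branching_penalty(decision_points, loc):
    """Penalty for extreme branching density (illustrative thresholds).

    Moderate branching costs nothing; only very dense decision
    logic loses readability points, and the penalty is capped.
    """
    density = decision_points / max(loc, 1)
    if density > 0.30:   # a decision point every ~3 lines
        return 2.0
    if density > 0.20:
        return 1.0
    return 0.0

print(branching_penalty(40, 100))  # 2.0
print(branching_penalty(10, 100))  # 0.0
```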
A proxy metric, normalized per language. Graph analysis is on the roadmap.
Breadth is lightly weighted. Specialists win too.
Hard to game. Stratified sampling + filters + LOC-weighted activity.
A snapshot: largest + representative + random.
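The largest + representative + random selection can be sketched like this. The strata sizes, the median-as-"representative" choice, and the function name are assumptions; only the three-bucket idea comes from the description above:

```python
import random

def sample_files(files, k_largest=2, k_random=2, seed=0):
    """Pick a snapshot: the largest files, one mid-sized file, plus random picks.

    `files` maps path -> line count. A fixed seed keeps the sketch
    deterministic; the real sampler's parameters are not public.
    """
    by_size = sorted(files, key=files.get, reverse=True)
    largest = by_size[:k_largest]
    remaining = [f for f in by_size if f not in largest]
    # Take the median-sized remaining file as a "representative" pick.
    representative = [remaining[len(remaining) // 2]] if remaining else []
    pool = [f for f in remaining if f not in representative]
    rng = random.Random(seed)
    extra = rng.sample(pool, min(k_random, len(pool)))
    return largest + representative + extra

files = {"a.go": 900, "b.go": 500, "c.go": 300, "d.go": 120, "e.go": 40}
print(sample_files(files))
```

Mixing the three buckets is what makes the snapshot hard to game: padding a repo with many tiny files barely shifts what gets sampled.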
Not your style. We ignore noise.
Score reflects what’s there. Not a moral judgment. Penalties are capped.
Tooling signals are on the roadmap. Code output counts now.
Minimum repos + files keeps ranks stable. Your card works regardless.
Idioms we reward: errors.Is/As, %w, ?, .context(), guard let, map_err. Anti-patterns that cost points: unwrap/expect, panic!, _ = err, log-only handlers, empty catch, try!

More questions? Contact us.