ConvertReference.org

Editorial & Corrections

ConvertReference is designed to be a dependable reference for calculators and conversions. That means our content needs to be clear, accurate, and transparent about how it’s created and updated. This page explains how we approach editorial standards, how we handle corrections, and how you can flag issues for review.

1. How We Create and Review Content

We aim for content that is:

  • Accurate – Calculations and conversion factors are based on established scientific and financial references, and our tools are tested against known values where practical (see the sketch after this list).

  • Clear and focused – Pages are written to be concise, practical, and free of unnecessary jargon.

  • Consistent – Terminology, formatting, and explanations are standardized across similar tools where possible.
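
For example, here is a minimal sketch (in Python) of the kind of known-value check we mean. The convert function and its arguments are hypothetical stand-ins for illustration, not our actual code:

  # Minimal sketch, not our actual test suite: check a hypothetical
  # convert() helper against a value that is exact by definition.
  def convert(value, factor):
      return value * factor

  # 1 inch is defined as exactly 2.54 cm, so 10 in must give 25.4 cm.
  assert abs(convert(10, 2.54) - 25.4) < 1e-9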

Before new pages go live or major changes are made, we:

  • Review for clarity, correctness, and consistency with existing tools.

  • Check that examples, edge cases, and explanations line up with the underlying math or logic (an example of this kind of check appears after this list).

  • Ensure metadata (titles, descriptions, timestamps) matches the current state of the page.
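
As an illustration of the edge-case checks described above, the Python sketch below verifies a temperature conversion at two well-known reference points; fahrenheit_to_celsius is a hypothetical name used for this example only:

  # Minimal sketch, assuming a hypothetical fahrenheit_to_celsius() helper.
  def fahrenheit_to_celsius(f):
      return (f - 32) * 5 / 9

  # Known reference points: 32 °F is 0 °C, and -40 is where both scales meet.
  assert fahrenheit_to_celsius(32) == 0
  assert fahrenheit_to_celsius(-40) == -40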

2. Errors, Corrections, and Updates

If we discover an error—whether in a calculator, a conversion factor, or an explanation—we aim to:

  1. Fix the underlying issue (code, dataset entry, or text).

  2. Verify the correction against authoritative references or test cases (see the regression-test sketch after this list).

  3. Update the live page and, where relevant, any related tools or documentation.
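
To make step 2 concrete, one common pattern is a regression test that pins the corrected value so the error cannot silently return. The sketch below is a hypothetical Python example of that pattern, not our actual code:

  # Minimal sketch of a regression test for a corrected conversion.
  # Hypothetical scenario: meters_to_feet() once used an inverted factor.
  def meters_to_feet(m):
      return m / 0.3048  # 1 ft is defined as exactly 0.3048 m

  # Pin the corrected value against the authoritative definition.
  assert abs(meters_to_feet(1) - 3.280839895) < 1e-6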

For significant corrections (for example, a wrong formula, inverted unit relationship, or misleading explanation), we may:

  • Update the page’s “last updated” timestamp to reflect the change.

  • Clarify the correction in the text if it could materially affect how someone interprets results.

Routine edits such as minor wording improvements or layout tweaks may not be individually flagged, but they still go through the same basic review process.

3. Reporting Issues and Requesting Clarifications

We rely on user feedback to catch issues that automated tests or internal reviews may miss. If you notice:

  • A result that looks wrong or inconsistent

  • A confusing explanation or missing step

  • A broken example, link, or label

please let us know. The easiest way to report an issue is to open one in our public GitHub repository, describing:

  • The page or tool you were using

  • What you expected to see

  • What you actually saw (including example inputs, if possible)

We review all reports, prioritize anything that could affect correctness, and aim to respond or resolve issues within a reasonable timeframe.